Dec 06 06:24:57 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 06 06:24:57 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:57 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 
06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 06:24:58 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 
06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 06:24:58 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 06 06:24:58 crc kubenswrapper[4823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 06:24:58 crc kubenswrapper[4823]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 06 06:24:58 crc kubenswrapper[4823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 06:24:58 crc kubenswrapper[4823]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
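A note on the long restorecon run that ends just above: every one of those entries reads "not reset as customized by admin" because container_file_t is a customizable SELinux type, so restorecon deliberately leaves files that already carry it (with per-pod MCS category pairs such as s0:c7,c13) untouched unless a forced relabel is requested with restorecon -F. These lines record skips, not failures. A quick way to summarize what was skipped, grouped by target context, is a short script like the following (a minimal sketch; the filename kubelet-start.log is a placeholder for a saved copy of this excerpt, not something named in the log):

```python
#!/usr/bin/env python3
"""Tally restorecon "not reset as customized by admin" entries by context.

Minimal sketch: expects a saved copy of the journal excerpt above; the
default filename "kubelet-start.log" is a placeholder, not from the log.
"""
import re
import sys
from collections import Counter

# Matches entries such as:
#   restorecon[4687]: /var/lib/kubelet/... not reset as customized
#   by admin to system_u:object_r:container_file_t:s0:c7,c13
SKIPPED = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin "
    r"to (?P<ctx>\S+)"
)

def tally(text: str) -> Counter:
    """Count skipped paths per target SELinux context."""
    return Counter(m.group("ctx") for m in SKIPPED.finditer(text))

if __name__ == "__main__":
    fname = sys.argv[1] if len(sys.argv) > 1 else "kubelet-start.log"
    # The excerpt wraps entries across physical lines; collapse all
    # whitespace so every entry matches as one continuous run of text.
    log = re.sub(r"\s+", " ", open(fname).read())
    for ctx, n in tally(log).most_common():
        print(f"{n:6d}  {ctx}")
```

Run against this excerpt, every context comes back as container_file_t with per-pod MCS pairs (s0:c7,c13 for the two catalog pods, s0:c682,c947 for the oauth pod, and so on), which is what you would expect from category-isolated pod volumes rather than genuinely mislabeled files.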
Dec 06 06:24:58 crc kubenswrapper[4823]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 06 06:24:58 crc kubenswrapper[4823]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.932073 4823 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936496 4823 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936576 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936623 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936694 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936740 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936781 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936823 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936866 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936917 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.936966 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937011 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937051 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937099 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937145 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937188 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
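From this point the startup log is dominated by feature-gate warnings from kubenswrapper, and the run continues below. The "unrecognized feature gate" W lines name OpenShift-specific gates (NodeDisruptionPolicy, OpenShiftPodSecurityAdmission, GatewayAPI, and so on) that the upstream kubelet's feature-gate table does not contain, while the "Setting GA feature gate ...=true" lines flag gates that have already graduated to GA and whose explicit setting will be dropped in a future release. All of them are warnings, not startup failures. A sketch for pulling the distinct gate names out of the excerpt follows (same assumption as before: the input filename is a placeholder):

```python
#!/usr/bin/env python3
"""List the feature gates the kubelet warned about during startup.

Minimal sketch over the W lines above; the default filename is a
placeholder for a saved copy of this journal excerpt.
"""
import re
import sys

# Matches both warning forms emitted by feature_gate.go:
#   ... feature_gate.go:330] unrecognized feature gate: Example
#   ... feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. ...
GATE = re.compile(
    r"feature_gate\.go:\d+\] "
    r"(?:unrecognized feature gate: (?P<unknown>\w+)"
    r"|Setting GA feature gate (?P<ga>\w+)=true)"
)

def scan(text: str) -> tuple[set[str], set[str]]:
    """Return (unrecognized gates, explicitly set GA gates)."""
    unknown: set[str] = set()
    ga: set[str] = set()
    for m in GATE.finditer(text):
        if m.group("unknown"):
            unknown.add(m.group("unknown"))
        else:
            ga.add(m.group("ga"))
    return unknown, ga

if __name__ == "__main__":
    fname = sys.argv[1] if len(sys.argv) > 1 else "kubelet-start.log"
    # Collapse the wrapped journal lines so entries split across
    # physical line breaks still match as one run of text.
    log = re.sub(r"\s+", " ", open(fname).read())
    unknown, ga = scan(log)
    print(f"{len(unknown)} unrecognized gates: {', '.join(sorted(unknown))}")
    print(f"{len(ga)} GA gates set explicitly: {', '.join(sorted(ga))}")
```

In the lines shown here the GA set is DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy, and CloudDualStackNodeIPs; everything under "unrecognized" is simply outside the upstream kubelet's table and is ignored by the kubelet itself rather than applied.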
Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937235 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937277 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937326 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937370 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937417 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937460 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937506 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937553 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937595 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937635 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937692 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937745 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937790 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937836 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937923 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.937968 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938014 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938056 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938102 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938144 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938184 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938229 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938272 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938312 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938353 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938393 4823 
feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938433 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938478 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938520 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938560 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938600 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938640 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938710 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938754 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938803 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938846 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938891 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938937 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.938983 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939025 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939066 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939111 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939152 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939197 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939239 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939279 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939319 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939364 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939409 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939451 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939492 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939533 4823 
feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939574 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939617 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939674 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.939727 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.939856 4823 flags.go:64] FLAG: --address="0.0.0.0" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.939918 4823 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.939976 4823 flags.go:64] FLAG: --anonymous-auth="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940021 4823 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940064 4823 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940107 4823 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940159 4823 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940205 4823 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940248 4823 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940294 4823 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940337 4823 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940379 4823 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940422 4823 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940469 4823 flags.go:64] FLAG: --cgroup-root="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940514 4823 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940557 4823 flags.go:64] FLAG: --client-ca-file="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940599 4823 flags.go:64] FLAG: --cloud-config="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940644 4823 flags.go:64] FLAG: --cloud-provider="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940704 4823 flags.go:64] FLAG: --cluster-dns="[]" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940752 4823 flags.go:64] FLAG: --cluster-domain="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940794 4823 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940846 4823 flags.go:64] FLAG: --config-dir="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940893 4823 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.940936 4823 flags.go:64] FLAG: --container-log-max-files="5" Dec 06 06:24:58 crc 
kubenswrapper[4823]: I1206 06:24:58.940986 4823 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941034 4823 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941077 4823 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941119 4823 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941165 4823 flags.go:64] FLAG: --contention-profiling="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941208 4823 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941252 4823 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941294 4823 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941336 4823 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941380 4823 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941422 4823 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941469 4823 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941512 4823 flags.go:64] FLAG: --enable-load-reader="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941554 4823 flags.go:64] FLAG: --enable-server="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941601 4823 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941648 4823 flags.go:64] FLAG: --event-burst="100" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941721 4823 flags.go:64] FLAG: --event-qps="50" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941768 4823 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941811 4823 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941883 4823 flags.go:64] FLAG: --eviction-hard="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.941999 4823 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942046 4823 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942090 4823 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942132 4823 flags.go:64] FLAG: --eviction-soft="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942175 4823 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942222 4823 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942266 4823 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942313 4823 flags.go:64] FLAG: --experimental-mounter-path="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942356 4823 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942398 4823 flags.go:64] FLAG: 
--fail-swap-on="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942441 4823 flags.go:64] FLAG: --feature-gates="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942485 4823 flags.go:64] FLAG: --file-check-frequency="20s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942533 4823 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942578 4823 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942626 4823 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942689 4823 flags.go:64] FLAG: --healthz-port="10248" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942746 4823 flags.go:64] FLAG: --help="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942793 4823 flags.go:64] FLAG: --hostname-override="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942835 4823 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942880 4823 flags.go:64] FLAG: --http-check-frequency="20s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942923 4823 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.942970 4823 flags.go:64] FLAG: --image-credential-provider-config="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943014 4823 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943063 4823 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943110 4823 flags.go:64] FLAG: --image-service-endpoint="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943153 4823 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943199 4823 flags.go:64] FLAG: --kube-api-burst="100" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943243 4823 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943286 4823 flags.go:64] FLAG: --kube-api-qps="50" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943332 4823 flags.go:64] FLAG: --kube-reserved="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943379 4823 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943425 4823 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943467 4823 flags.go:64] FLAG: --kubelet-cgroups="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943509 4823 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943552 4823 flags.go:64] FLAG: --lock-file="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943596 4823 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943643 4823 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943704 4823 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943752 4823 flags.go:64] FLAG: --log-json-split-stream="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943802 4823 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 06 06:24:58 crc 
kubenswrapper[4823]: I1206 06:24:58.943846 4823 flags.go:64] FLAG: --log-text-split-stream="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943889 4823 flags.go:64] FLAG: --logging-format="text" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943932 4823 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.943980 4823 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944025 4823 flags.go:64] FLAG: --manifest-url="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944069 4823 flags.go:64] FLAG: --manifest-url-header="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944113 4823 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944155 4823 flags.go:64] FLAG: --max-open-files="1000000" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944203 4823 flags.go:64] FLAG: --max-pods="110" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944246 4823 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944293 4823 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944336 4823 flags.go:64] FLAG: --memory-manager-policy="None" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944378 4823 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944420 4823 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944462 4823 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944504 4823 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944555 4823 flags.go:64] FLAG: --node-status-max-images="50" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944602 4823 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944645 4823 flags.go:64] FLAG: --oom-score-adj="-999" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944720 4823 flags.go:64] FLAG: --pod-cidr="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944776 4823 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944833 4823 flags.go:64] FLAG: --pod-manifest-path="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944876 4823 flags.go:64] FLAG: --pod-max-pids="-1" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944919 4823 flags.go:64] FLAG: --pods-per-core="0" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.944977 4823 flags.go:64] FLAG: --port="10250" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945027 4823 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945075 4823 flags.go:64] FLAG: --provider-id="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945119 4823 flags.go:64] FLAG: --qos-reserved="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945162 4823 flags.go:64] FLAG: --read-only-port="10255" Dec 06 06:24:58 crc 
kubenswrapper[4823]: I1206 06:24:58.945204 4823 flags.go:64] FLAG: --register-node="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945248 4823 flags.go:64] FLAG: --register-schedulable="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945296 4823 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945344 4823 flags.go:64] FLAG: --registry-burst="10" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945386 4823 flags.go:64] FLAG: --registry-qps="5" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945428 4823 flags.go:64] FLAG: --reserved-cpus="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945474 4823 flags.go:64] FLAG: --reserved-memory="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945519 4823 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945566 4823 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945609 4823 flags.go:64] FLAG: --rotate-certificates="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945655 4823 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945716 4823 flags.go:64] FLAG: --runonce="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945760 4823 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945803 4823 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945846 4823 flags.go:64] FLAG: --seccomp-default="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945932 4823 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.945977 4823 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946027 4823 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946071 4823 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946113 4823 flags.go:64] FLAG: --storage-driver-password="root" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946155 4823 flags.go:64] FLAG: --storage-driver-secure="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946197 4823 flags.go:64] FLAG: --storage-driver-table="stats" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946244 4823 flags.go:64] FLAG: --storage-driver-user="root" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946291 4823 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946341 4823 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946385 4823 flags.go:64] FLAG: --system-cgroups="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946427 4823 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946473 4823 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946515 4823 flags.go:64] FLAG: --tls-cert-file="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946557 4823 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 06 06:24:58 
crc kubenswrapper[4823]: I1206 06:24:58.946600 4823 flags.go:64] FLAG: --tls-min-version="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946647 4823 flags.go:64] FLAG: --tls-private-key-file="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946707 4823 flags.go:64] FLAG: --topology-manager-policy="none" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946752 4823 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946795 4823 flags.go:64] FLAG: --topology-manager-scope="container" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946838 4823 flags.go:64] FLAG: --v="2" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946883 4823 flags.go:64] FLAG: --version="false" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.946939 4823 flags.go:64] FLAG: --vmodule="" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.947002 4823 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.947048 4823 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947249 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947304 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947348 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947391 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947435 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947483 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947532 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947575 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947618 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947696 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947745 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947787 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947842 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947888 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947929 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.947971 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948013 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948054 4823 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948101 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948147 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948190 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948231 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948272 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948314 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948355 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948402 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948449 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948494 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948536 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948578 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948619 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948675 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948722 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948796 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948844 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948891 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948932 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.948973 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949014 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949055 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949103 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949146 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949188 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:24:58 crc 
kubenswrapper[4823]: W1206 06:24:58.949237 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949281 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949336 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949391 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949442 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949485 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949527 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949569 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949610 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949652 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949713 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949770 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949815 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949872 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949921 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.949968 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950012 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950053 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950113 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950169 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950215 4823 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950270 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950351 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950410 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950455 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:24:58 
crc kubenswrapper[4823]: W1206 06:24:58.950503 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950548 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.950592 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.950643 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.959809 4823 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.959895 4823 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.959998 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960012 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960019 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960026 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960031 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960037 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960043 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960048 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960055 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960061 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960065 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960069 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960073 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960078 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960082 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960086 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960090 4823 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960094 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960098 4823 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960102 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960107 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960113 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960120 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960127 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960133 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960138 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960143 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960149 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960159 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960165 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960171 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960178 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960184 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960192 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960205 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960212 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960218 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960225 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960230 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960236 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960241 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960246 4823 feature_gate.go:330] unrecognized 
feature gate: GCPClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960251 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960256 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960262 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960267 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960271 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960277 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960282 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960287 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960291 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960297 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960302 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960307 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960314 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960319 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960323 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960329 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
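The "FLAG: --name=value" lines earlier in the sequence come from flags.go:64, where the kubelet logs every flag it parsed, defaults included, before applying the config file on top. The same pattern is easy to reproduce with Go's standard flag package via flag.VisitAll; the three flags below are placeholders standing in for the kubelet's much larger set.

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // Placeholder flags echoing names seen in the dump above.
        flag.String("node-ip", "", "IP address of the node")
        flag.Int("max-pods", 110, "maximum number of pods")
        flag.Bool("fail-swap-on", true, "fail if swap is enabled")
        flag.Parse()

        // Visit every flag, set or defaulted, and print it in the
        // same "FLAG: --name=value" shape as the kubelet's startup dump.
        flag.VisitAll(func(f *flag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
        })
    }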
Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960334 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960338 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960344 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960348 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960353 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960359 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960363 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960367 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960371 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960375 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960380 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960383 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960399 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.960407 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960555 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960565 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960569 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960573 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960578 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960583 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960587 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960591 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 
06:24:58.960596 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960601 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960605 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960608 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960613 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960619 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960624 4823 feature_gate.go:330] unrecognized feature gate: Example Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960629 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960633 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960636 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960641 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960646 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960650 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960654 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960693 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960699 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960704 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960709 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960713 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960717 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960722 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960726 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960730 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960734 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960738 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960741 4823 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallGCP Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960746 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960750 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960754 4823 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960757 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960761 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960764 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960768 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960772 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960775 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960779 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960782 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960786 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960792 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960798 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960803 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960808 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960814 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
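The client-certificate rotation messages just below report an expiration of 2026-02-24 05:52:08 UTC but a rotation deadline of 2025-12-20, about 346 hours away. The deadline is deliberately earlier than expiry: client-go's certificate manager picks a jittered point late in the certificate's validity (roughly 70-90% of the way through, in the upstream implementation) so a fleet of kubelets does not renew simultaneously. The sketch below reproduces that computation under two stated assumptions: the 0.7 + 0.2*rand jitter factor, and a notBefore one year before the logged expiry (which would put the logged 2025-12-20 deadline around 82% of the lifetime, inside that window).

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point 70-90% of the way through
    // the certificate's validity, mirroring the jitter the certificate
    // manager applies before scheduling the next rotation.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Assumed issuance one year before the expiry seen in this log.
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline)
        fmt.Println("waiting:", time.Until(deadline).Round(time.Second))
    }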
Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960819 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960824 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960829 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960833 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960837 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960840 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960844 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960847 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960851 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960854 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960860 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960864 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960867 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960871 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960874 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960878 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960881 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960885 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960888 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 06:24:58 crc kubenswrapper[4823]: W1206 06:24:58.960896 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.960904 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.961144 4823 server.go:940] "Client rotation is on, will bootstrap in background" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.975132 4823 bootstrap.go:85] 
"Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.975274 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.975922 4823 server.go:997] "Starting client certificate rotation" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.975952 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.976147 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-20 16:56:43.501550441 +0000 UTC Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.976247 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 346h31m44.525305752s for next certificate rotation Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.992000 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 06:24:58 crc kubenswrapper[4823]: I1206 06:24:58.994024 4823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.007710 4823 log.go:25] "Validated CRI v1 runtime API" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.031971 4823 log.go:25] "Validated CRI v1 image API" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.033593 4823 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.036344 4823 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-06-06-20-01-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.036389 4823 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.052307 4823 manager.go:217] Machine: {Timestamp:2025-12-06 06:24:59.050794026 +0000 UTC m=+0.336545996 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:41501b97-4373-424f-8e6e-d4f001bb3d11 BootID:120eea9f-209d-4622-89eb-9d0194df90a2 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp 
DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:69:ff:de Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:69:ff:de Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f4:b2:a6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2b:e0:0c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3a:e0:27 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:50:52:74 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:a0:55:71:0c:2a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:2d:b3:00:54:bb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] 
SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.052540 4823 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.052704 4823 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.053312 4823 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.053517 4823 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.053559 4823 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.053810 4823 topology_manager.go:138] "Creating topology manager with none policy" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.053823 4823 container_manager_linux.go:303] "Creating device plugin manager" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.054106 4823 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.054150 4823 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.054359 4823 state_mem.go:36] "Initialized new in-memory state store" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.054520 4823 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.055358 4823 kubelet.go:418] "Attempting to sync node with API server" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.055414 4823 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.055439 4823 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.055453 4823 kubelet.go:324] "Adding apiserver pod source" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.055472 4823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.058080 4823 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.058403 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.058514 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.058460 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.058588 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.058725 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.062833 4823 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.064936 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065619 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065653 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065688 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065736 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065765 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065777 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065793 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065815 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.065827 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.072897 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.073478 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.074299 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.074883 4823 server.go:1280] "Started kubelet" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.075730 4823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.076233 4823 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.076265 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.075353 4823 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 06 06:24:59 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.078193 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e8c3d581a696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 06:24:59.074832746 +0000 UTC m=+0.360584706,LastTimestamp:2025-12-06 06:24:59.074832746 +0000 UTC m=+0.360584706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079309 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079385 4823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079421 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:56:21.943498758 +0000 UTC Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079485 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 484h31m22.864016229s for next certificate rotation Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079512 4823 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079526 4823 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.079589 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.079984 4823 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.080432 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.080531 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.081109 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="200ms" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.081436 4823 server.go:460] "Adding debug handlers to kubelet server" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.081773 4823 factory.go:153] Registering CRI-O factory Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.082087 4823 factory.go:221] Registration of the crio 
container factory successfully Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.082224 4823 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.082247 4823 factory.go:55] Registering systemd factory Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.082259 4823 factory.go:221] Registration of the systemd container factory successfully Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.082295 4823 factory.go:103] Registering Raw factory Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.082313 4823 manager.go:1196] Started watching for new ooms in manager Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.083481 4823 manager.go:319] Starting recovery of all containers Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103208 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103301 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103317 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103353 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103368 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103384 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103400 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103438 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 
06:24:59.103456 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103469 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103504 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103526 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103542 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103558 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103588 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103603 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103615 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103629 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103711 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103745 4823 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103761 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103773 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103784 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103825 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103853 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103872 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103910 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103927 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103938 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103954 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103981 4823 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.103994 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104010 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104024 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104040 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104090 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104105 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104118 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104147 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104162 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104173 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104186 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104198 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104229 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104243 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104256 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104268 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104281 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104310 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104322 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104358 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104387 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104424 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104437 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104468 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104490 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104514 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104563 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104581 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104593 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104609 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104642 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104678 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104694 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104708 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104721 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104735 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104772 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104792 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104807 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104842 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104860 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104873 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104887 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104928 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104943 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104960 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.104984 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105040 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105058 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105071 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105103 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105117 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105129 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105142 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105182 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105205 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105221 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105260 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105276 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105290 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105305 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105339 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105353 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105366 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105381 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105419 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105435 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105456 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105469 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105516 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105533 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105552 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105589 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105614 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105630 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105679 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105698 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105714 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105734 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105776 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105793 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105809 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105880 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105922 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105940 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105959 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.105973 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106012 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106031 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106049 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106089 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106106 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106121 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106135 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106173 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106191 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106207 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106221 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106258 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106277 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106291 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106331 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106350 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106367 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106380 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106414 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106427 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106443 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106456 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106491 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106508 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106522 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106540 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106577 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106594 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106608 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106622 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106687 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106704 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106739 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106754 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106769 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106781 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106819 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106837 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106852 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106867 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106904 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106919 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106934 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106947 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.106990 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107009 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107026 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107066 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107081 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107096 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107109 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107122 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107162 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107181 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107197 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107233 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107249 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107263 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107277 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107316 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107333 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107348 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107366 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107402 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107420 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107434 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107449 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107483 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107497 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107511 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107524 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107537 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107571 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107583 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107595 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107608 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107639 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107139 4823 manager.go:324] Recovery completed Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107653 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107727 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107742 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107780 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107794 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107808 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107821 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107858 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107873 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107887 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.107904 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.108805 4823 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.108833 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.108869 4823 reconstruct.go:97] "Volume reconstruction finished" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.108879 4823 reconciler.go:26] "Reconciler: start to sync state" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.125106 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.127637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.127713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.127729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.128537 4823 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.128574 4823 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.128607 4823 state_mem.go:36] "Initialized new in-memory state store" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.136620 4823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.139390 4823 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.139444 4823 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.139470 4823 kubelet.go:2335] "Starting kubelet main sync loop" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.139525 4823 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.140446 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.140526 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.146160 4823 policy_none.go:49] "None policy: Start" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.147326 4823 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.147375 4823 state_mem.go:35] "Initializing new in-memory state store" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.180600 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203009 4823 manager.go:334] "Starting Device Plugin manager" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203102 4823 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203118 4823 server.go:79] "Starting device plugin registration server" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203551 4823 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203575 4823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203742 4823 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203960 4823 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.203977 4823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.214890 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.240186 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 06 06:24:59 crc kubenswrapper[4823]: 
Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.240356 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.242313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.242371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.242380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.242570 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.242948 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.243031 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.243789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.243821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.243832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.244035 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.244061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.244081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.244090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.244171 4823 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.244209 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.245680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.245719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.245731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.245718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.245896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.245909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.246138 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.246304 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.246342 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247424 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247937 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.247993 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.249256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.249281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.249291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.250777 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.251460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.251519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.251542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.252435 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.254578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.254604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.254615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.281586 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="400ms" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.304411 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.306081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.306167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.306184 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.306227 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.306964 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: 
connection refused" node="crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311114 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311333 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311386 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311510 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311595 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc 
Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311761 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311807 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311867 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311895 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.311912 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413039 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413216 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName:
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413567 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413583 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413602 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413650 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413546 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413712 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413631 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413781 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413786 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413733 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413752 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.413979 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414006 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414038 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414012 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414091 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.414232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.507752 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510871 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.511526 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: 
Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.507752 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.510871 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.511526 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc"
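The "Attempting to register node" / "Unable to register node with API server ... connection refused" pair repeats here because this kubelet hosts the very kube-apiserver it is trying to register with, so registration can only succeed once the static pods below come up. A hedged sketch of that retry-until-reachable shape; the endpoint, attempt cap, and sleep are illustrative, not the kubelet's actual registration code:

    // Sketch only: keep dialing until the API server endpoint accepts.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func registerNode(endpoint string, maxAttempts int) error {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err != nil {
                // Matches the repeated "Unable to register node" errors above.
                fmt.Printf("attempt %d: unable to register node: %v\n", attempt, err)
                time.Sleep(time.Second) // the kubelet uses jittered backoff
                continue
            }
            conn.Close()
            fmt.Println("endpoint reachable; registration would POST to /api/v1/nodes")
            return nil
        }
        return fmt.Errorf("gave up after %d attempts", maxAttempts)
    }

    func main() {
        if err := registerNode("api-int.crc.testing:6443", 5); err != nil {
            fmt.Println(err)
        }
    }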
Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.580534 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.596375 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.609382 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3a2af51de1baa1777cc64c661dbf6a28137ca827b4140094ce6153b8e99076a5 WatchSource:0}: Error finding container 3a2af51de1baa1777cc64c661dbf6a28137ca827b4140094ce6153b8e99076a5: Status 404 returned error can't find the container with id 3a2af51de1baa1777cc64c661dbf6a28137ca827b4140094ce6153b8e99076a5 Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.615886 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.619865 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-153fb0922902f898aa3406104b3e9ba7d8c1a3c043d5bfdc48e1dc182107c72a WatchSource:0}: Error finding container 153fb0922902f898aa3406104b3e9ba7d8c1a3c043d5bfdc48e1dc182107c72a: Status 404 returned error can't find the container with id 153fb0922902f898aa3406104b3e9ba7d8c1a3c043d5bfdc48e1dc182107c72a Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.634770 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-89efdf52133e550b2a4091b81f1fca393f867b4256e98fe8183787a9d0fcf2b9 WatchSource:0}: Error finding container 89efdf52133e550b2a4091b81f1fca393f867b4256e98fe8183787a9d0fcf2b9: Status 404 returned error can't find the container with id 89efdf52133e550b2a4091b81f1fca393f867b4256e98fe8183787a9d0fcf2b9 Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.638779 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.644900 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.658173 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-015e680417987b03bbf5646dbd20a85b072f254eb6f9b7ae4513805ad4aa2038 WatchSource:0}: Error finding container 015e680417987b03bbf5646dbd20a85b072f254eb6f9b7ae4513805ad4aa2038: Status 404 returned error can't find the container with id 015e680417987b03bbf5646dbd20a85b072f254eb6f9b7ae4513805ad4aa2038 Dec 06 06:24:59 crc kubenswrapper[4823]: W1206 06:24:59.664213 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-06e4c5ae49e311baee93183a754acffb3c7fdba1d372cbfc93a8807e71d6b880 WatchSource:0}: Error finding container 06e4c5ae49e311baee93183a754acffb3c7fdba1d372cbfc93a8807e71d6b880: Status 404 returned error can't find the container with id 06e4c5ae49e311baee93183a754acffb3c7fdba1d372cbfc93a8807e71d6b880 Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.683340 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="800ms" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.912635 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.914067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.914121 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.914139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:24:59 crc kubenswrapper[4823]: I1206 06:24:59.914169 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:24:59 crc kubenswrapper[4823]: E1206 06:24:59.914640 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.078054 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.146951 4823 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6" exitCode=0 Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.147051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.147254 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"06e4c5ae49e311baee93183a754acffb3c7fdba1d372cbfc93a8807e71d6b880"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.147419 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.149139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.149172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.149182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.149840 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.149935 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"015e680417987b03bbf5646dbd20a85b072f254eb6f9b7ae4513805ad4aa2038"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.152950 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66" exitCode=0 Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.153040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.153097 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"89efdf52133e550b2a4091b81f1fca393f867b4256e98fe8183787a9d0fcf2b9"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.153231 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.155623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.155686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.155705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.155805 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="068df01b0df4a4c9620db847f1c55e810abdf0f0c11d560579dba27ad19395aa" exitCode=0 Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.155887 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"068df01b0df4a4c9620db847f1c55e810abdf0f0c11d560579dba27ad19395aa"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.155932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"153fb0922902f898aa3406104b3e9ba7d8c1a3c043d5bfdc48e1dc182107c72a"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.156073 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157582 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c" exitCode=0 Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157628 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3a2af51de1baa1777cc64c661dbf6a28137ca827b4140094ce6153b8e99076a5"} Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157874 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.157937 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.159079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.159099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.159109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.159264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.159302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.159315 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:00 crc kubenswrapper[4823]: W1206 06:25:00.312896 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:25:00 crc kubenswrapper[4823]: E1206 06:25:00.312978 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:25:00 crc kubenswrapper[4823]: W1206 06:25:00.313041 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:25:00 crc kubenswrapper[4823]: E1206 06:25:00.313083 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:25:00 crc kubenswrapper[4823]: W1206 06:25:00.380295 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:25:00 crc kubenswrapper[4823]: E1206 06:25:00.380414 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:25:00 crc kubenswrapper[4823]: W1206 06:25:00.396226 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:25:00 crc kubenswrapper[4823]: E1206 06:25:00.396315 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 06 06:25:00 crc kubenswrapper[4823]: E1206 06:25:00.484991 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="1.6s" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.715337 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.726882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.726934 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.726952 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:00 crc kubenswrapper[4823]: I1206 06:25:00.726995 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:25:00 crc kubenswrapper[4823]: E1206 06:25:00.727803 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.077561 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.171735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b7dddf8f5ca6bb9db03f8bca5c6dcdc673b2038b8e45de295442962742b37ca0"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.171976 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.173993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.174018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.174027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.177431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.177503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.177532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.177746 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.186012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.186052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.186061 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.191719 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.191796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.191810 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.191908 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.193358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.193402 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.193414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.195613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.195681 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.195699 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.196987 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e6b735c6aee2ef57cae08db96cb8678c2db7bb089e6d834af399a271cb42b1ec" exitCode=0 Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.197028 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e6b735c6aee2ef57cae08db96cb8678c2db7bb089e6d834af399a271cb42b1ec"} Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.197183 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.198183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.198226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:01 crc kubenswrapper[4823]: I1206 06:25:01.198245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.204818 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.204802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a"} Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.204977 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8"} Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.206589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.206634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.206653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.209128 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="02e79e87ab95ae04d012164a174310251c4180027db7e93c099abca160d14387" exitCode=0 Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.209278 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.209276 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"02e79e87ab95ae04d012164a174310251c4180027db7e93c099abca160d14387"} Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.209587 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.211435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.211561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.211580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.213102 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.213135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.213148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.328161 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.329554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.329589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.329599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.329620 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:25:02 crc kubenswrapper[4823]: I1206 06:25:02.686405 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.097497 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.217706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"119636b5dca7ee9f74a9ada5915075258cd60415c91821773b2fa6a7f7d65fb9"} Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.217794 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c695cb9cfc29a535e5a779a01210a47dfb59a8b43e9d23042ef8473ef8f4be41"} Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.217816 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20b50bfc1670dc85eba3b4f030e2ec7ce2601071e09f05ec41b26aeaed86a4d6"} Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.217827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4771c6bd312f06b43b3a5e253a7750765f42f65fff8ea7a0e28662396368b018"} Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.217838 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89a933fa44a48df8609a19f1f1230d91f9a691b4bda95e4def09ff61e0c5cac0"} Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.218025 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.218357 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.219288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.219380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.219436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 
06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.219538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.219571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.219583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.679838 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.680004 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.681082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.681109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:03 crc kubenswrapper[4823]: I1206 06:25:03.681118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:04 crc kubenswrapper[4823]: I1206 06:25:04.219917 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:04 crc kubenswrapper[4823]: I1206 06:25:04.221220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:04 crc kubenswrapper[4823]: I1206 06:25:04.221281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:04 crc kubenswrapper[4823]: I1206 06:25:04.221290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:04 crc kubenswrapper[4823]: I1206 06:25:04.947049 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.222327 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.223709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.223759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.223772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.423837 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.424016 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.424997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.425064 4823 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.425078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.546339 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.546550 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.547944 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.548005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.548017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:05 crc kubenswrapper[4823]: I1206 06:25:05.551345 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.225562 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.226678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.226715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.226726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.631767 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.994942 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.995131 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.996302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.996350 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:06 crc kubenswrapper[4823]: I1206 06:25:06.996362 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:07 crc kubenswrapper[4823]: I1206 06:25:07.228443 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:07 crc kubenswrapper[4823]: I1206 06:25:07.230124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:07 crc kubenswrapper[4823]: I1206 06:25:07.230185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:07 crc kubenswrapper[4823]: I1206 06:25:07.230201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:08 crc kubenswrapper[4823]: I1206 06:25:08.857361 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:25:08 crc kubenswrapper[4823]: I1206 06:25:08.857573 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:08 crc kubenswrapper[4823]: I1206 06:25:08.858897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:08 crc kubenswrapper[4823]: I1206 06:25:08.858947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:08 crc kubenswrapper[4823]: I1206 06:25:08.858962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:09 crc kubenswrapper[4823]: E1206 06:25:09.216001 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 06:25:10 crc kubenswrapper[4823]: I1206 06:25:10.908285 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:10 crc kubenswrapper[4823]: I1206 06:25:10.908483 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:10 crc kubenswrapper[4823]: I1206 06:25:10.909854 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:10 crc kubenswrapper[4823]: I1206 06:25:10.909890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:10 crc kubenswrapper[4823]: I1206 06:25:10.909900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:10 crc kubenswrapper[4823]: I1206 06:25:10.913246 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:11 crc kubenswrapper[4823]: I1206 06:25:11.238254 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:11 crc kubenswrapper[4823]: I1206 06:25:11.239531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:11 crc kubenswrapper[4823]: I1206 06:25:11.239601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:11 crc kubenswrapper[4823]: I1206 06:25:11.239618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.077097 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 06 06:25:12 crc kubenswrapper[4823]: E1206 06:25:12.086719 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 06 06:25:12 crc kubenswrapper[4823]: W1206 06:25:12.094896 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.094984 4823 trace.go:236] Trace[2146232865]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:25:02.093) (total time: 10001ms): Dec 06 06:25:12 crc kubenswrapper[4823]: Trace[2146232865]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:25:12.094) Dec 06 06:25:12 crc kubenswrapper[4823]: Trace[2146232865]: [10.001736301s] [10.001736301s] END Dec 06 06:25:12 crc kubenswrapper[4823]: E1206 06:25:12.095009 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 06 06:25:12 crc kubenswrapper[4823]: E1206 06:25:12.331484 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 06 06:25:12 crc kubenswrapper[4823]: W1206 06:25:12.479068 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.479175 4823 trace.go:236] Trace[700657783]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:25:02.478) (total time: 10001ms): Dec 06 06:25:12 crc kubenswrapper[4823]: Trace[700657783]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:25:12.479) Dec 06 06:25:12 crc kubenswrapper[4823]: Trace[700657783]: [10.001134762s] [10.001134762s] END Dec 06 06:25:12 crc kubenswrapper[4823]: E1206 06:25:12.479200 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.692764 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.692840 4823 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.707844 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 06 06:25:12 crc kubenswrapper[4823]: I1206 06:25:12.707909 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 06:25:13 crc kubenswrapper[4823]: I1206 06:25:13.104884 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]log ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]etcd ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/generic-apiserver-start-informers ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/priority-and-fairness-filter ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-apiextensions-informers ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-apiextensions-controllers ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/crd-informer-synced ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-system-namespaces-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 06 06:25:13 crc kubenswrapper[4823]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 06 06:25:13 crc kubenswrapper[4823]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 06 
06:25:13 crc kubenswrapper[4823]: [+]poststarthook/bootstrap-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/start-kube-aggregator-informers ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-registration-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-discovery-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]autoregister-completion ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-openapi-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 06 06:25:13 crc kubenswrapper[4823]: livez check failed Dec 06 06:25:13 crc kubenswrapper[4823]: I1206 06:25:13.104969 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
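The startup probe against kube-apiserver first fails with 403 (the anonymous probe user is rejected while the rbac/bootstrap-roles poststarthook is still pending) and then with 500, whose verbose body above pinpoints exactly which [+]/[-] checks still fail. A sketch of issuing that kind of verbose health check; the URL is the cluster's own, and real probes supply client credentials and a CA bundle, which this sketch omits:

    // Sketch only: GET /livez?verbose and print the per-check breakdown.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkLivez(base string) error {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get(base + "/livez?verbose")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // With ?verbose the body lists each [+]/[-] check, as captured above.
        fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("livez failed with status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkLivez("https://api-int.crc.testing:6443"); err != nil {
            fmt.Println("probe result: failure:", err)
        }
    }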
Dec 06 06:25:13 crc kubenswrapper[4823]: I1206 06:25:13.908526 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:25:13 crc kubenswrapper[4823]: I1206 06:25:13.909030 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.449769 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.449948 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.451140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.451297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.451316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.463031 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.531873 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.533022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.533060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.533070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:15 crc kubenswrapper[4823]: I1206 06:25:15.533097 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:25:15 crc kubenswrapper[4823]: E1206 06:25:15.536455 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 06 06:25:16 crc kubenswrapper[4823]: I1206 06:25:16.249990 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:16 crc kubenswrapper[4823]: I1206 06:25:16.252290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:16 crc kubenswrapper[4823]: I1206 06:25:16.252340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:16 crc kubenswrapper[4823]: I1206 06:25:16.252351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:16 crc kubenswrapper[4823]: I1206 06:25:16.554195 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 06:25:16 crc kubenswrapper[4823]: I1206 06:25:16.995035 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.066238 4823 apiserver.go:52] "Watching apiserver" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.072356 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
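Here the tide turns: the client-go reflectors that had been failing ListAndWatch against :6443 now report "Caches populated", and the kubelet starts "Watching apiserver". A sketch of that list-retry-then-populate behavior with an injected lister; the function name and backoff values are illustrative, not client-go's own:

    // Sketch only: retry List until it succeeds, then declare the cache
    // populated, the same progression visible in the reflector lines above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func listUntilPopulated(lister func() ([]string, error)) {
        backoff := time.Second
        for {
            items, err := lister()
            if err != nil {
                fmt.Println("failed to list:", err) // cf. the "Unhandled Error" lines
                time.Sleep(backoff)
                if backoff < 8*time.Second {
                    backoff *= 2 // illustrative; client-go has its own backoff manager
                }
                continue
            }
            fmt.Printf("Caches populated (%d objects); switching to watch\n", len(items))
            return
        }
    }

    func main() {
        calls := 0
        listUntilPopulated(func() ([]string, error) {
            calls++
            if calls < 3 {
                return nil, errors.New("dial tcp 38.102.83.65:6443: connect: connection refused")
            }
            return []string{"crc"}, nil
        })
    }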
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.073703 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.073820 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.074356 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.074368 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.074422 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.074556 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.078891 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.078936 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.078999 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.079006 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.079127 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.078906 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.079763 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.079902 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.081921 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.082379 4823 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.107099 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.121041 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.133539 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.147028 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.156722 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.166997 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.175416 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.186264 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.195790 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.206513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.697956 4823 trace.go:236] Trace[270258131]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:25:03.116) (total time: 14580ms): Dec 06 06:25:17 crc kubenswrapper[4823]: Trace[270258131]: ---"Objects listed" error: 14580ms (06:25:17.697) Dec 06 06:25:17 crc kubenswrapper[4823]: Trace[270258131]: [14.580891671s] [14.580891671s] END Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.698026 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.699316 4823 trace.go:236] Trace[1777948864]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 06:25:03.204) (total time: 14494ms): Dec 06 06:25:17 crc kubenswrapper[4823]: Trace[1777948864]: ---"Objects listed" error: 14494ms (06:25:17.699) Dec 06 06:25:17 crc kubenswrapper[4823]: Trace[1777948864]: [14.494454219s] [14.494454219s] END Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.699360 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.699517 4823 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.748006 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59608->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.748454 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59608->192.168.126.11:17697: read: connection reset by peer" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800521 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 06 06:25:17 crc 
kubenswrapper[4823]: I1206 06:25:17.800604 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800636 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800685 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800710 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800734 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800808 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800838 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800864 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800897 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800924 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.800955 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801046 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801076 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801107 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801181 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801189 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801212 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801199 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801245 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801233 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801273 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801254 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801300 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801326 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801349 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801380 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801427 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801483 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801484 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801510 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801536 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801601 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801613 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801699 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801725 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801764 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801765 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801796 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801826 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801892 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801901 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801919 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801948 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801972 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.802002 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.802029 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803086 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.801917 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.802016 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.802022 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.802047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803017 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803044 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803816 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803285 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803794 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803344 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803292 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803469 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.803598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804217 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.802827 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804325 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804386 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804442 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804473 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804506 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804587 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804621 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804652 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804683 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804735 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804759 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804788 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804819 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804847 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804877 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804868 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804915 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804951 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804950 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.804977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805009 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805039 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805069 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805100 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805162 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805194 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805217 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805231 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805318 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805358 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805446 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805476 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805508 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805546 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805580 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805869 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805888 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.805934 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806148 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806290 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806345 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806372 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806517 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806776 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.806801 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807063 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807074 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807104 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807384 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807562 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807610 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807680 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807805 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807929 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.807970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.808282 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.808508 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.808563 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.808894 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809018 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809231 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809484 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809528 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809521 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809574 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809596 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809601 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809902 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809893 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.809973 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810069 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810095 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810117 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810085 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810144 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810169 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810169 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810201 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810229 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810270 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810294 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810317 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810341 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810363 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810381 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810404 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810426 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810445 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810467 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810490 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810554 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810576 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810587 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810594 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811025 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811035 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811238 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811315 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811346 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.810800 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811452 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811558 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811575 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811397 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811818 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.812799 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.811556 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.812932 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.812976 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813009 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813067 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:25:17 crc 
kubenswrapper[4823]: I1206 06:25:17.813079 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813124 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813157 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813190 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.813220 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.814444 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.814714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.814714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.814769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815224 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815243 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815276 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815354 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815242 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815705 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815939 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816059 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816292 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815612 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816427 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815809 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815877 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815902 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.815962 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816161 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816255 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816257 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816398 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816414 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816560 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816587 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816592 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816615 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816679 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816745 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816766 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816786 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816803 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816824 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816848 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.816854 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817102 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817247 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817292 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817339 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817350 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817362 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817479 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817513 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817513 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817564 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817592 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817643 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817700 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817726 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817771 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817781 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817848 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817876 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817949 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817974 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818024 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818052 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:25:17 crc 
kubenswrapper[4823]: I1206 06:25:17.818103 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818128 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818176 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818205 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818227 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818390 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818442 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818468 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818517 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818544 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818568 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818619 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818645 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818705 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818731 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818788 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818814 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818866 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818945 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818974 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819027 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819054 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819105 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819132 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819178 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819205 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819230 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819280 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819307 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819358 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819384 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819433 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819459 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819539 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819631 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819677 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819738 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819882 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819919 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819976 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820068 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820132 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820171 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820259 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:25:17 crc kubenswrapper[4823]: 
I1206 06:25:17.820318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820405 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820459 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820644 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820697 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820712 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820727 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820764 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820779 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820793 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node 
\"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820831 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820848 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820862 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820875 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820888 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820926 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820942 4823 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820956 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820993 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821010 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821023 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821037 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821051 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821090 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821103 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821115 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821128 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821168 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821182 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821195 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821210 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821250 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821264 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821277 4823 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821290 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821327 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821341 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821353 4823 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821367 4823 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821407 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821421 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821436 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821452 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821490 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821505 4823 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821517 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821530 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821568 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821583 4823 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821595 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821607 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821619 4823 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821681 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821707 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821720 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821733 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821770 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821785 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821798 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821811 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821849 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821865 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821878 4823 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821890 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821902 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821940 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821954 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821967 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821979 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822014 4823 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822028 4823 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822041 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822053 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822065 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822105 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822117 4823 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822130 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822145 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822181 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822196 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822209 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822223 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822270 4823 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822285 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822300 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822312 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822347 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822363 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822375 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822388 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822401 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822439 4823 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822453 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822465 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822478 4823 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822517 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822532 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822546 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822558 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822593 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822609 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822623 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822637 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822684 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822701 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822716 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822728 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822741 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822780 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822793 4823 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.822806 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.828991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.829624 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.829743 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.831363 4823 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817795 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.817806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818049 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818126 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818308 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818553 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818904 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.818905 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819242 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819262 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819453 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819622 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.819999 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820367 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820419 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820593 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.820971 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.821248 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.822967 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.823103 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.824375 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.825631 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.825892 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.825968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.826045 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.826210 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.826704 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827025 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827154 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827308 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827308 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827437 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.827752 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.828041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.828055 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.828145 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.829461 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.829699 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.830294 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.830337 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.830813 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.834517 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.834805 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.834992 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.835393 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.836869 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.837458 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.838362 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.838617 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.839062 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.839221 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.839208 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.839658 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.839744 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.840279 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:25:18.340249831 +0000 UTC m=+19.626001791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.841142 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:18.341109706 +0000 UTC m=+19.626861756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.841195 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:18.341183808 +0000 UTC m=+19.626935778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.847271 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.847298 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.847310 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.847379 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:18.347357197 +0000 UTC m=+19.633109157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.850520 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.850540 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.850551 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:25:17 crc kubenswrapper[4823]: E1206 06:25:17.850594 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:18.35058187 +0000 UTC m=+19.636333830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.852509 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.852738 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.852771 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.852755 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.854548 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.858003 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.858264 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.858410 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.858692 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.859086 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.859414 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.860227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.860443 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.860689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.861104 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.861204 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.861279 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.861893 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.861899 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.862004 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.862426 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.862618 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.862988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863044 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863195 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863271 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863357 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863383 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863398 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.863472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.865304 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.865927 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.870946 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.871309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.874478 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.879339 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928307 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928388 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928407 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928422 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928436 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928451 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928463 4823 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928475 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928488 4823 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928504 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928521 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928535 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928550 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928563 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928577 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928592 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928605 4823 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928620 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928634 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928648 4823 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928684 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928698 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928711 4823 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928726 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928740 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928753 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928765 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928778 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928790 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928802 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928819 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928832 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 06
06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928846 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928858 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928948 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928965 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928977 4823 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.928992 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929005 4823 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929018 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929031 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929043 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929056 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929068 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929084 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929096 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929109 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929121 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929134 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929146 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929159 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929171 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929185 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929199 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929212 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929223 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929236 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929249 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929261 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929277 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929288 4823 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929299 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929311 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929323 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929335 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929346 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929356 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929370 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929381 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929394 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929424 4823 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929436 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929449 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929463 4823 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929473 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929486 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929496 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929509 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929520 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929531 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929542 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929553 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929564 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: 
I1206 06:25:17.929575 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929587 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929602 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929615 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.929067 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:25:17 crc kubenswrapper[4823]: I1206 06:25:17.994397 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.007424 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.013519 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 06:25:18 crc kubenswrapper[4823]: W1206 06:25:18.035977 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-dc3eaca0d5bd64878bdda3259518f5f719605a69cf830a363eadc8a356e4ea10 WatchSource:0}: Error finding container dc3eaca0d5bd64878bdda3259518f5f719605a69cf830a363eadc8a356e4ea10: Status 404 returned error can't find the container with id dc3eaca0d5bd64878bdda3259518f5f719605a69cf830a363eadc8a356e4ea10 Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.102312 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.106445 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.106557 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.108122 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.115057 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.115408 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.125720 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.137035 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.142137 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.142263 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.150525 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.160222 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.170914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.190164 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.206900 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.216338 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.228342 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.240686 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.252802 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.257053 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2"} Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.257096 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7"} Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.257107 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a7b7ad1b7077e09ff21e595ad67da93b3b7624bc3711d31b63ceb436354e2b7"} Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.259117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6"} Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.259177 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f4cdc45b06c4f1a956eb7e5521206521bf7cfa223c42acad2906c467b1c07162"} Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.262505 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.264975 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a" exitCode=255 Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 
06:25:18.265035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a"} Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.265834 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.269248 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dc3eaca0d5bd64878bdda3259518f5f719605a69cf830a363eadc8a356e4ea10"} Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.272284 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.272543 4823 scope.go:117] "RemoveContainer" containerID="a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.278538 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.292320 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.303998 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.315070 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.326421 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.337524 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.351801 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.432681 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.432794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.432830 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.432886 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:25:19.432852574 +0000 UTC m=+20.718604534 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.432950 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.432974 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433000 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433015 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433068 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:19.433051549 +0000 UTC m=+20.718803509 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433100 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433147 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:19.433137142 +0000 UTC m=+20.718889182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433219 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433252 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:19.433243515 +0000 UTC m=+20.718995565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.432999 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433365 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433385 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433439 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:18 crc kubenswrapper[4823]: E1206 06:25:18.433472 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:19.433462291 +0000 UTC m=+20.719214331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:18 crc kubenswrapper[4823]: I1206 06:25:18.940483 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.139939 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.140058 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.140199 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.140290 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.144810 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.145524 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.147524 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.148354 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.149492 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.150158 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.150911 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.153045 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.153764 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.154965 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.155540 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.156948 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.157567 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.158267 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.158306 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 
06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.159390 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.160004 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.161258 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.168577 4823 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.169501 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.170905 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.171495 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.172211 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.173269 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.174068 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.175157 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.175907 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.177114 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.177580 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.178979 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.178977 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.179567 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.180061 4823 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.180170 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.182743 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.183286 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.184313 4823 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.186288 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.187005 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.188158 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.188895 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.190017 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.190535 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.191495 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.192123 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.193268 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.193785 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.194802 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.195274 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.195996 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.197403 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.198030 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.199179 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.199970 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.200771 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.202537 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.203861 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.212550 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.238606 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.254867 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.270744 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.274391 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.276362 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af"} Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.277136 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.364520 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.380876 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.395239 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.410330 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.426077 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.441008 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.443263 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.443367 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.443395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.443622 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443703 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443745 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:25:21.443706662 +0000 UTC m=+22.729458622 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443798 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:21.443788475 +0000 UTC m=+22.729540435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443877 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443911 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443931 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.443927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.443983 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:21.44397195 +0000 UTC m=+22.729723910 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.444120 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.444221 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:21.444202447 +0000 UTC m=+22.729954407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.444411 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.444544 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.444638 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:19 crc kubenswrapper[4823]: E1206 06:25:19.444788 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:21.444759223 +0000 UTC m=+22.730511183 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:19 crc kubenswrapper[4823]: I1206 06:25:19.455646 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.140747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:20 crc kubenswrapper[4823]: E1206 06:25:20.141214 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.280306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f"} Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.296213 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.311122 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.322374 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.332798 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.345146 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.362082 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.379947 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.912163 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.918073 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.923293 4823 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.930570 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.948558 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.966395 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.981141 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:20 crc kubenswrapper[4823]: I1206 06:25:20.994005 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.005968 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.019048 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.032584 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.044600 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.058542 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.072265 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.085746 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.100232 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.114424 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.129346 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.140947 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.140983 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.141128 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.141289 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.465990 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.466082 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466114 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:25:25.466090402 +0000 UTC m=+26.751842362 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.466146 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.466175 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.466203 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466205 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466246 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466257 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466258 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466272 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466288 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:25.466281687 +0000 UTC m=+26.752033647 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466206 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466293 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466309 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466299 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:25.466293988 +0000 UTC m=+26.752045938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466354 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:25.466341649 +0000 UTC m=+26.752093609 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.466367 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:25.46636038 +0000 UTC m=+26.752112340 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.936812 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.938484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.938515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.938523 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.938584 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.948464 4823 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.948800 4823 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.949955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.949987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.949998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.950015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.950026 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:21Z","lastTransitionTime":"2025-12-06T06:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.970484 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.975192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.975240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.975250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.975265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.975276 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:21Z","lastTransitionTime":"2025-12-06T06:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:21 crc kubenswrapper[4823]: E1206 06:25:21.988247 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.993068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.993139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.993161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.993179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:21 crc kubenswrapper[4823]: I1206 06:25:21.993193 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:21Z","lastTransitionTime":"2025-12-06T06:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: E1206 06:25:22.007340 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.012478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.012556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
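
The repeated status-patch failures above all have a single root cause: each PATCH of the node status is sent through the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-06, so every TLS handshake is rejected before the patch is even evaluated. A minimal standalone Go sketch of the same validity check (endpoint taken from the entries above; a diagnostic illustration, not kubelet or webhook code):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Complete a handshake without verification so the expired certificate
	// can be read instead of rejected, then print its validity window.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Same condition the kubelet logs as
		// "x509: certificate has expired or is not yet valid".
		fmt.Println("serving certificate has expired")
	}
}

On a CRC/OpenShift Local VM this pattern usually means the VM was resumed long after its bundled certificates lapsed; the cluster is expected to rotate them on startup, and these failures repeat until that rotation completes.
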
event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.012571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.012595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.012611 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: E1206 06:25:22.025587 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.029593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.029652 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.029686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.029713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.029725 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: E1206 06:25:22.044839 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:22 crc kubenswrapper[4823]: E1206 06:25:22.045311 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.047316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
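
The run of "Error updating node status, will retry" entries capped by "update node status exceeds retry count" reflects the kubelet's fixed retry budget for one node-status sync: upstream kubelet_node_status.go attempts the update a constant number of times (nodeStatusUpdateRetry, 5) before giving up until the next sync tick, which is why the same image-list payload reappears several times within the same second. A runnable, simplified stand-in for that loop shape (not the actual kubelet source):

package main

import (
	"errors"
	"fmt"
)

// Mirrors the upstream kubelet constant: attempts per status sync.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(try func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := try(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Reproduce the situation in the log: every attempt dies at the webhook.
	webhookErr := errors.New("x509: certificate has expired or is not yet valid")
	fmt.Println(updateNodeStatus(func() error { return webhookErr }))
}
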
event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.047357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.047375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.047394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.047406 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.139741 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:22 crc kubenswrapper[4823]: E1206 06:25:22.139876 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.149785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.150493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.150568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.150647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.150770 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.252538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.252842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.252987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.253136 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.253221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.356741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.356784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.356795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.356812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.356821 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.460577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.460632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.460646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.460692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.460709 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.562461 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.562837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.562971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.563066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.563162 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.665537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.665822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.665917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.666012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.666076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.768789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.769090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.769170 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.769263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.769344 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
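
Separately from the webhook failures, the Ready=False condition is driven by container-runtime network readiness: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, and that file only appears once the cluster network provider (OVN-Kubernetes here) starts and writes it. A rough standalone Go equivalent of the check the message implies (path taken from the log; the real probe runs inside the container runtime via the CRI Status call, so this is only an approximation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Does at least one CNI config file exist in the directory the log names?
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}
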
Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.871885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.871927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.871937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.871952 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.871964 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.974100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.974490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.974586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.974701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:22 crc kubenswrapper[4823]: I1206 06:25:22.974782 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:22Z","lastTransitionTime":"2025-12-06T06:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.043541 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mv8th"] Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.043903 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.047579 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4h4hh"] Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.048270 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.051558 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.051579 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.052126 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.052336 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.052407 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.052447 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.054539 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.077144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.077179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.077189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.077205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.077217 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.098723 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.129915 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.139761 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.139765 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:23 crc kubenswrapper[4823]: E1206 06:25:23.139921 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:23 crc kubenswrapper[4823]: E1206 06:25:23.140007 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.166300 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.179837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.179879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.179889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.179905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.179916 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.181166 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f42caff9-cbd1-4b1f-91ca-51651adc4a2a-hosts-file\") pod \"node-resolver-mv8th\" (UID: \"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\") " pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.181197 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/026a8135-2818-40fa-b269-4ea047404758-serviceca\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.181221 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjg2d\" (UniqueName: \"kubernetes.io/projected/f42caff9-cbd1-4b1f-91ca-51651adc4a2a-kube-api-access-cjg2d\") pod \"node-resolver-mv8th\" (UID: \"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\") " pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.181242 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026a8135-2818-40fa-b269-4ea047404758-host\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.181270 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5g9\" (UniqueName: \"kubernetes.io/projected/026a8135-2818-40fa-b269-4ea047404758-kube-api-access-vl5g9\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.186981 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.201131 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.215467 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.232805 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.254236 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.267628 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f42caff9-cbd1-4b1f-91ca-51651adc4a2a-hosts-file\") pod \"node-resolver-mv8th\" (UID: \"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\") " pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281584 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/026a8135-2818-40fa-b269-4ea047404758-serviceca\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjg2d\" (UniqueName: \"kubernetes.io/projected/f42caff9-cbd1-4b1f-91ca-51651adc4a2a-kube-api-access-cjg2d\") pod \"node-resolver-mv8th\" (UID: \"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\") " pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026a8135-2818-40fa-b269-4ea047404758-host\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281679 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl5g9\" (UniqueName: \"kubernetes.io/projected/026a8135-2818-40fa-b269-4ea047404758-kube-api-access-vl5g9\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281681 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f42caff9-cbd1-4b1f-91ca-51651adc4a2a-hosts-file\") pod \"node-resolver-mv8th\" (UID: \"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\") " pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.281881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026a8135-2818-40fa-b269-4ea047404758-host\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.282179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.282201 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.282212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.282227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.282237 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.283920 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/026a8135-2818-40fa-b269-4ea047404758-serviceca\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.300098 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.303232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl5g9\" (UniqueName: \"kubernetes.io/projected/026a8135-2818-40fa-b269-4ea047404758-kube-api-access-vl5g9\") pod \"node-ca-4h4hh\" (UID: \"026a8135-2818-40fa-b269-4ea047404758\") " pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.312287 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjg2d\" (UniqueName: \"kubernetes.io/projected/f42caff9-cbd1-4b1f-91ca-51651adc4a2a-kube-api-access-cjg2d\") pod \"node-resolver-mv8th\" (UID: \"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\") " pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.319597 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.343599 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.361484 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.361963 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mv8th" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.373227 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4h4hh" Dec 06 06:25:23 crc kubenswrapper[4823]: W1206 06:25:23.374983 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf42caff9_cbd1_4b1f_91ca_51651adc4a2a.slice/crio-4adfd757d98198142dec87a355266d45be4ff0b4bc3be022af36f9d933fb45f0 WatchSource:0}: Error finding container 4adfd757d98198142dec87a355266d45be4ff0b4bc3be022af36f9d933fb45f0: Status 404 returned error can't find the container with id 4adfd757d98198142dec87a355266d45be4ff0b4bc3be022af36f9d933fb45f0 Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.385214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.385254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.385275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.385292 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.385302 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.386801 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.408442 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.421266 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.437442 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.458188 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.471695 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.495199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.495237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.495248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.495265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.495276 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.597582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.597625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.597634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.597651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.597676 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.700635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.700724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.700744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.700769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.700783 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.804330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.804406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.804422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.804447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.804460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.890347 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bldh8"] Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.891567 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7wlj2"] Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.892163 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-95qxf"] Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.892952 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.893056 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.894857 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.897479 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.897643 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.897967 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898099 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898131 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898171 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898116 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898252 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898313 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898349 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898511 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.898584 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.907624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 
06:25:23.907707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.907722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.907742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.907756 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:23Z","lastTransitionTime":"2025-12-06T06:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.914389 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.935288 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.953201 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.969463 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.986042 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:23Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988320 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-netns\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988386 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2faf943-388e-4105-a30d-b0bbb041f8e0-cni-binary-copy\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988409 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-socket-dir-parent\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988462 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-cni-multus\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988521 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/69d0518f-7105-49e1-b537-f4de7b8f9a14-rootfs\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988549 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-hostroot\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-cni-bin\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988609 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-cnibin\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988628 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-k8s-cni-cncf-io\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988676 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-daemon-config\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988713 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-system-cni-dir\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988744 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22972156-629d-4bc6-8108-9f50b7416afc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988772 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-multus-certs\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988801 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/69d0518f-7105-49e1-b537-f4de7b8f9a14-proxy-tls\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988823 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbscc\" (UniqueName: \"kubernetes.io/projected/69d0518f-7105-49e1-b537-f4de7b8f9a14-kube-api-access-kbscc\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69d0518f-7105-49e1-b537-f4de7b8f9a14-mcd-auth-proxy-config\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988895 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-etc-kubernetes\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w696\" (UniqueName: \"kubernetes.io/projected/e2faf943-388e-4105-a30d-b0bbb041f8e0-kube-api-access-2w696\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988947 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-kubelet\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-conf-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.988991 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-cni-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.989024 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-cnibin\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " 
pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.989058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-os-release\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.989084 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22972156-629d-4bc6-8108-9f50b7416afc-cni-binary-copy\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.989130 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-os-release\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.989158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjzl\" (UniqueName: \"kubernetes.io/projected/22972156-629d-4bc6-8108-9f50b7416afc-kube-api-access-hhjzl\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:23 crc kubenswrapper[4823]: I1206 06:25:23.989185 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-system-cni-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.006142 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.010371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.010423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.010432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.010448 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.010459 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.019894 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.035834 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.052193 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.067359 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090360 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhjzl\" (UniqueName: \"kubernetes.io/projected/22972156-629d-4bc6-8108-9f50b7416afc-kube-api-access-hhjzl\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090466 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-system-cni-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090498 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-socket-dir-parent\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090524 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-netns\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090575 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2faf943-388e-4105-a30d-b0bbb041f8e0-cni-binary-copy\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090605 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-cni-multus\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/69d0518f-7105-49e1-b537-f4de7b8f9a14-rootfs\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090695 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-hostroot\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-cni-bin\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-system-cni-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090760 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-cnibin\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090789 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-k8s-cni-cncf-io\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " 
pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090796 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-hostroot\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090775 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-netns\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090818 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-daemon-config\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-cni-multus\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090882 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-cni-bin\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/69d0518f-7105-49e1-b537-f4de7b8f9a14-rootfs\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090839 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-cnibin\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-socket-dir-parent\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-k8s-cni-cncf-io\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-system-cni-dir\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.090996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22972156-629d-4bc6-8108-9f50b7416afc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091015 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-system-cni-dir\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091056 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-multus-certs\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091026 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-run-multus-certs\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/69d0518f-7105-49e1-b537-f4de7b8f9a14-proxy-tls\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091151 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbscc\" (UniqueName: \"kubernetes.io/projected/69d0518f-7105-49e1-b537-f4de7b8f9a14-kube-api-access-kbscc\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091192 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69d0518f-7105-49e1-b537-f4de7b8f9a14-mcd-auth-proxy-config\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091251 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-etc-kubernetes\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091278 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w696\" (UniqueName: \"kubernetes.io/projected/e2faf943-388e-4105-a30d-b0bbb041f8e0-kube-api-access-2w696\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091307 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-kubelet\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091334 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-conf-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091362 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-cni-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091391 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-cnibin\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-os-release\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091439 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22972156-629d-4bc6-8108-9f50b7416afc-cni-binary-copy\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091464 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-os-release\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091571 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-os-release\") pod 
\"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-host-var-lib-kubelet\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-conf-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2faf943-388e-4105-a30d-b0bbb041f8e0-cni-binary-copy\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091706 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-cni-dir\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e2faf943-388e-4105-a30d-b0bbb041f8e0-multus-daemon-config\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091755 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-cnibin\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091788 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e2faf943-388e-4105-a30d-b0bbb041f8e0-etc-kubernetes\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-os-release\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.091922 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22972156-629d-4bc6-8108-9f50b7416afc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.092171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22972156-629d-4bc6-8108-9f50b7416afc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.092324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69d0518f-7105-49e1-b537-f4de7b8f9a14-mcd-auth-proxy-config\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.092341 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22972156-629d-4bc6-8108-9f50b7416afc-cni-binary-copy\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.094967 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.097153 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/69d0518f-7105-49e1-b537-f4de7b8f9a14-proxy-tls\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.111099 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhjzl\" (UniqueName: \"kubernetes.io/projected/22972156-629d-4bc6-8108-9f50b7416afc-kube-api-access-hhjzl\") pod \"multus-additional-cni-plugins-95qxf\" (UID: \"22972156-629d-4bc6-8108-9f50b7416afc\") " pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 
06:25:24.112382 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.113564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.113622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.113632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.113681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.113702 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.117697 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w696\" (UniqueName: \"kubernetes.io/projected/e2faf943-388e-4105-a30d-b0bbb041f8e0-kube-api-access-2w696\") pod \"multus-bldh8\" (UID: \"e2faf943-388e-4105-a30d-b0bbb041f8e0\") " pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.118155 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbscc\" (UniqueName: \"kubernetes.io/projected/69d0518f-7105-49e1-b537-f4de7b8f9a14-kube-api-access-kbscc\") pod \"machine-config-daemon-7wlj2\" (UID: \"69d0518f-7105-49e1-b537-f4de7b8f9a14\") " pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.126865 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.140565 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:24 crc kubenswrapper[4823]: E1206 06:25:24.140728 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.148895 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.165186 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.184279 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.200618 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.210054 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bldh8" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.213496 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.216150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.216198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.216209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.216227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.216239 4823 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.220279 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:25:24 crc kubenswrapper[4823]: W1206 06:25:24.220499 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2faf943_388e_4105_a30d_b0bbb041f8e0.slice/crio-ab64774a911cb7a3057edb3db41a1e453674a098c0e917c24462f419d997b0f4 WatchSource:0}: Error finding container ab64774a911cb7a3057edb3db41a1e453674a098c0e917c24462f419d997b0f4: Status 404 returned error can't find the container with id ab64774a911cb7a3057edb3db41a1e453674a098c0e917c24462f419d997b0f4 Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.227395 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-95qxf" Dec 06 06:25:24 crc kubenswrapper[4823]: W1206 06:25:24.231883 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69d0518f_7105_49e1_b537_f4de7b8f9a14.slice/crio-7aa9d31560bbfd9af15e293d59758b4f85288c8d33dc104a9f65a33af5d4820c WatchSource:0}: Error finding container 7aa9d31560bbfd9af15e293d59758b4f85288c8d33dc104a9f65a33af5d4820c: Status 404 returned error can't find the container with id 7aa9d31560bbfd9af15e293d59758b4f85288c8d33dc104a9f65a33af5d4820c Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.233741 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.254535 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.271333 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.277301 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rr4m5"] Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.278479 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.282239 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.282442 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.282560 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.282718 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.282879 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.283030 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.283156 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.293041 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.295049 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerStarted","Data":"f099cd6a0851d8d578bf43932eccc4de571be2c30d610eff0d1f4ed1d9836e63"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.300103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mv8th" event={"ID":"f42caff9-cbd1-4b1f-91ca-51651adc4a2a","Type":"ContainerStarted","Data":"9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.300158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mv8th" event={"ID":"f42caff9-cbd1-4b1f-91ca-51651adc4a2a","Type":"ContainerStarted","Data":"4adfd757d98198142dec87a355266d45be4ff0b4bc3be022af36f9d933fb45f0"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.301650 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerStarted","Data":"ab64774a911cb7a3057edb3db41a1e453674a098c0e917c24462f419d997b0f4"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.302789 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4h4hh" event={"ID":"026a8135-2818-40fa-b269-4ea047404758","Type":"ContainerStarted","Data":"340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.302817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4h4hh" event={"ID":"026a8135-2818-40fa-b269-4ea047404758","Type":"ContainerStarted","Data":"e6620a4ac4c8d35f81d0bc3e53d379c12d0be6d3a4e770dd97d6c9d63174c5f9"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.303878 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"7aa9d31560bbfd9af15e293d59758b4f85288c8d33dc104a9f65a33af5d4820c"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.307624 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.323193 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.331697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.331762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.331773 4823 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.331791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.331822 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.335929 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.348042 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.361879 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.376997 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.389091 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394015 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-kubelet\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-netd\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394085 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-script-lib\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-netns\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394128 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-etc-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394147 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394165 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-node-log\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394188 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-log-socket\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394216 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-systemd\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394233 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-var-lib-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394272 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-config\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-env-overrides\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-bin\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394340 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-slash\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394382 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-ovn\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovn-node-metrics-cert\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394423 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbgp\" (UniqueName: \"kubernetes.io/projected/d7a8c395-bca0-48a5-bb35-10e956e85a2a-kube-api-access-qnbgp\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-systemd-units\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.394464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.404471 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fd
dfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.429030 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.435029 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.435083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 
06:25:24.435099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.435125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.435143 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.445871 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.463239 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.478113 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.493028 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-kubelet\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-netd\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" 
Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495741 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-script-lib\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495766 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-netns\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495771 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-kubelet\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495835 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-netd\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495853 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-etc-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495789 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-etc-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-netns\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-node-log\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-log-socket\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.495993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-systemd\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496014 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-var-lib-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-log-socket\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496071 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-config\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496094 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-env-overrides\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496122 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-bin\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496145 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-slash\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496147 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496213 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-ovn\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496239 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovn-node-metrics-cert\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496366 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbgp\" (UniqueName: \"kubernetes.io/projected/d7a8c395-bca0-48a5-bb35-10e956e85a2a-kube-api-access-qnbgp\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496392 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-systemd-units\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496414 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496477 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496073 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-node-log\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-var-lib-openvswitch\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496699 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-script-lib\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-systemd\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-bin\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496752 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-slash\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-ovn\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.496782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-systemd-units\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.497118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-env-overrides\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.497354 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-config\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.506700 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovn-node-metrics-cert\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.506843 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc 
kubenswrapper[4823]: I1206 06:25:24.515345 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbgp\" (UniqueName: \"kubernetes.io/projected/d7a8c395-bca0-48a5-bb35-10e956e85a2a-kube-api-access-qnbgp\") pod \"ovnkube-node-rr4m5\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.521583 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@
sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.534068 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:24Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.540257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.540307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.540318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.540394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.540408 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.642818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.642862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.642873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.642890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.642902 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.703380 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.744798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.744840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.744852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.744871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.744884 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
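
The condition={...} payload that setters.go logs here is a serialized core/v1 NodeCondition: the kubelet flips the node's Ready condition to False and keeps re-recording it while the runtime reports NetworkReady=false. A self-contained sketch that reproduces the same JSON shape (the struct below mirrors only the fields visible in this log; it is not the real k8s.io/api type):

```go
// Local mirror of the fields visible in the logged condition={...} payloads.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-12-06T06:25:24Z",
		LastTransitionTime: "2025-12-06T06:25:24Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false ...",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same key order and shape as the log's condition={...}
}
```
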
Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.847428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.847485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.847562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.847582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.847616 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.952962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.953314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.953325 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.953343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:24 crc kubenswrapper[4823]: I1206 06:25:24.953356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:24Z","lastTransitionTime":"2025-12-06T06:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.056138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.056215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.056229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.056247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.056262 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.140868 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.140868 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.141037 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.141112 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.159010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.159050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.159061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.159077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.159088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
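
The NodeNotReady condition and the "Error syncing pod, skipping" failures above all hinge on one filesystem check: the CRI network plugin finds no CNI configuration under /etc/kubernetes/cni/net.d/, because ovnkube-node (whose sandbox is only now being started) has not written one yet. Roughly the same check as a standalone sketch; the extension list is an assumption based on what libcni-style loaders conventionally accept:

```go
// List candidate CNI config files in the directory named by the log message.
// If none exist, the runtime reports NetworkReady=false and the node stays NotReady.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension set
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir)
	}
}
```
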
Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.262333 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.262377 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.262386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.262403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.262413 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.309049 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4" exitCode=0 Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.309133 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.309192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"77bffb303293e67375fb94850372985bda20ea557ef14205104486a3fca9e076"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.313322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.313379 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.315338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerStarted","Data":"df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.317861 4823 generic.go:334] "Generic (PLEG): container finished" podID="22972156-629d-4bc6-8108-9f50b7416afc" containerID="78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966" exitCode=0 Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.317918 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" 
event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerDied","Data":"78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.331223 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355
e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.355898 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.369367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.369416 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.369428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.369447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.369463 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.378102 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.392565 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.411981 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.431180 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.451981 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.468539 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.472395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.472457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.472469 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.472492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.472508 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
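
One recurring detail in the status patches above: containers such as check-endpoints and network-check-target-container report lastState.terminated with exitCode 137 and reason ContainerStatusUnknown. 137 is the conventional 128+signal encoding for a SIGKILLed process; here the old containers vanished across the restart, so the runtime can only synthesize that terminal state (the restartCount of 3 appears to carry over from before the restart). The encoding itself:

```go
// 128 + signal number is the conventional exit code for a
// signal-terminated process; SIGKILL is signal 9, hence 137.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	fmt.Println(128 + int(syscall.SIGKILL)) // 137
}
```
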
Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.484090 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.498439 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.508441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.508557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.508576 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.508598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.508615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 
06:25:25.508737 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.508752 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.508762 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.508799 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:33.508786971 +0000 UTC m=+34.794538931 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509048 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509064 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509071 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509093 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:33.509086469 +0000 UTC m=+34.794838429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509185 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 06:25:33.509155721 +0000 UTC m=+34.794907681 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509188 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509252 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:33.509242044 +0000 UTC m=+34.794993994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509188 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: E1206 06:25:25.509297 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:33.509289975 +0000 UTC m=+34.795042045 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.513393 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.532333 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.544425 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\
" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.559098 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.571413 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.578730 4823 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.578770 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.578781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.578800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.578811 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.588172 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\"
:\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.600814 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.621937 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.639060 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.654212 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.667756 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.681879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.681926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.681939 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.681955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.681965 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.681972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.694157 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.714227 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.730118 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.746017 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.770018 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sy
sctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.784192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.784225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.784233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.784246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.784257 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.789613 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:25Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.885758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.885803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.885852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.885870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.885882 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.988205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.988247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.988257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.988276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:25 crc kubenswrapper[4823]: I1206 06:25:25.988288 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:25Z","lastTransitionTime":"2025-12-06T06:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.090576 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.090608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.090615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.090628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.090637 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.140638 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:26 crc kubenswrapper[4823]: E1206 06:25:26.140800 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.193005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.193042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.193051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.193065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.193075 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.295031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.295068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.295076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.295090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.295099 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.322574 4823 generic.go:334] "Generic (PLEG): container finished" podID="22972156-629d-4bc6-8108-9f50b7416afc" containerID="5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6" exitCode=0 Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.322649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerDied","Data":"5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.326965 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.327006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.327024 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.327037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.327049 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.327061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.335131 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.355445 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.373120 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.388931 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.398855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.398890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.398898 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.398914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.398922 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.403914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.416171 4823 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.430261 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.442743 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.455638 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.468047 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.479508 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.495466 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.501299 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.501345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.501359 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.501378 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.501390 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.509260 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bd
bc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.526871 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:26Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.604512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.604559 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.604568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.604588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.604602 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.707285 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.707333 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.707346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.707365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.707376 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.811132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.811182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.811197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.811215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.811225 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.914397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.914756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.914768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.914813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:26 crc kubenswrapper[4823]: I1206 06:25:26.914826 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:26Z","lastTransitionTime":"2025-12-06T06:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.017623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.017695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.017707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.017723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.017734 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.120332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.120366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.120373 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.120387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.120396 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.140737 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.140800 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:27 crc kubenswrapper[4823]: E1206 06:25:27.140914 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:27 crc kubenswrapper[4823]: E1206 06:25:27.140944 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.223614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.223640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.223649 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.223677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.223689 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.325954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.325994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.326005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.326023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.326035 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.332064 4823 generic.go:334] "Generic (PLEG): container finished" podID="22972156-629d-4bc6-8108-9f50b7416afc" containerID="9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6" exitCode=0 Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.332117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerDied","Data":"9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.346424 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.368285 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.388594 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.400372 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.412107 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.423963 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.430349 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.430384 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.430414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.430428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.430437 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.434281 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:
23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.444438 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.457396 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d
1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.471267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.486541 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.498497 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.512875 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.526384 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:27Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.533188 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.533221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.533231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.533245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.533256 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.635972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.636009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.636018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.636031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.636065 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.738111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.739100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.739145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.739164 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.739178 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.842342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.842381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.842394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.842408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.842418 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.945186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.945264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.945277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.945292 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:27 crc kubenswrapper[4823]: I1206 06:25:27.945302 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:27Z","lastTransitionTime":"2025-12-06T06:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.047317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.047358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.047366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.047381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.047390 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.140156 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:28 crc kubenswrapper[4823]: E1206 06:25:28.140290 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.150442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.150491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.150502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.150518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.150530 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.253175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.253215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.253225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.253240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.253251 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.339023 4823 generic.go:334] "Generic (PLEG): container finished" podID="22972156-629d-4bc6-8108-9f50b7416afc" containerID="5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba" exitCode=0 Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.339166 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerDied","Data":"5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.343719 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.356078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.356124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.356137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.356162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.356177 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.359333 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.376344 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.392797 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.406651 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.419024 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.437643 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.452781 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.462383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.462431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.462446 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.462477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.462496 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.468044 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.482939 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.497504 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.516050 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.538982 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.554745 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.564791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.564830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.564842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.564860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.564873 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.572783 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:28Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.667354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.667411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.667423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.667443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.667452 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.769897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.769945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.769960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.769978 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.769990 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.873176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.873230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.873240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.873255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.873266 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.976493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.976539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.976552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.976573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:28 crc kubenswrapper[4823]: I1206 06:25:28.976592 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:28Z","lastTransitionTime":"2025-12-06T06:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.079450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.079485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.079493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.079510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.079519 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.140105 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.140125 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:29 crc kubenswrapper[4823]: E1206 06:25:29.140230 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:29 crc kubenswrapper[4823]: E1206 06:25:29.140324 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.156801 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.170154 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.181596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.181644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.181655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.181687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.181699 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.184390 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.194777 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.209574 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.227205 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.243055 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.255585 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.270056 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.287420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.287454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.287466 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.287500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.287511 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.288346 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.306082 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.329650 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.345222 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.355225 4823 generic.go:334] "Generic (PLEG): container finished" podID="22972156-629d-4bc6-8108-9f50b7416afc" containerID="09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418" exitCode=0 Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.355301 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerDied","Data":"09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.363908 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.385235 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.391032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.391100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.391116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.391140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.391180 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.399962 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.415682 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.432920 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.451054 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.471095 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z 
is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.487979 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.493383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.493413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.493423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.493437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.493447 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.499071 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.512696 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.530760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.543992 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.562171 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.574878 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.592401 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:29Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.596046 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.596076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.596087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.596104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.596116 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.699829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.699873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.699882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.699898 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.699908 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.802970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.803023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.803040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.803061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.803075 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.906396 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.906436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.906446 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.906464 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:29 crc kubenswrapper[4823]: I1206 06:25:29.906478 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:29Z","lastTransitionTime":"2025-12-06T06:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.009696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.010158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.010222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.010294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.010360 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.113714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.113756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.113769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.113787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.113797 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.140747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:30 crc kubenswrapper[4823]: E1206 06:25:30.141103 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.216251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.216653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.216684 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.216709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.216722 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.319459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.319522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.319539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.319563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.319580 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.384322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.385593 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.385623 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.385696 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.391069 4823 generic.go:334] "Generic (PLEG): container finished" podID="22972156-629d-4bc6-8108-9f50b7416afc" containerID="e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285" exitCode=0 Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.391115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerDied","Data":"e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.415622 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.418293 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.423435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.423477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.423487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.423506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.423526 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.423573 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.434972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.452618 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.473793 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.492967 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.508786 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.528356 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.528692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.528720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.528729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.528746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.528758 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.545352 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.561894 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.577760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.601225 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.626437 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763e
eaf3683a0704d8f74f97231f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.634052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.634099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.634151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.634175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.634186 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.642188 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.659559 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.679353 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.699358 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.733855 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763e
eaf3683a0704d8f74f97231f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.736592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.736636 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.736650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.736690 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.736705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.751904 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.774807 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.812498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.825545 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.839645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.839715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.839727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.839748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.839758 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.840958 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.859905 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572
d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.878781 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.893716 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.909332 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.927305 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z"
Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.943685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.943738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.943750 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.943772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.943785 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:30Z","lastTransitionTime":"2025-12-06T06:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:30 crc kubenswrapper[4823]: I1206 06:25:30.945519 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:30Z is after 2025-08-24T17:21:41Z"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.047261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.047312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.047324 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.047344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.047355 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.140600 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:25:31 crc kubenswrapper[4823]: E1206 06:25:31.140834 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.140842 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:25:31 crc kubenswrapper[4823]: E1206 06:25:31.140939 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.150195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.150608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.150796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.150900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.150964 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.254237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.254392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.254410 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.254441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.254458 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.357380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.357450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.357465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.357492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.357510 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.398841 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" event={"ID":"22972156-629d-4bc6-8108-9f50b7416afc","Type":"ContainerStarted","Data":"56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.416180 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.430317 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.448295 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.460500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.460551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.460560 4823 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.460581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.460592 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.465792 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.482608 4823 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.498766 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.512946 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.527854 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.541942 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.555652 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.563443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.563495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.563504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.563518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.563527 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.577566 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.596990 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9
b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\
\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.610292 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.630057 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:31Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.666686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.666741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.666756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.666781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.666795 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.770478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.770574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.770585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.770604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.770614 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.873382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.873753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.873886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.873969 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.874029 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.977428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.977473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.977485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.977508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:31 crc kubenswrapper[4823]: I1206 06:25:31.977523 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:31Z","lastTransitionTime":"2025-12-06T06:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.080726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.080775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.080787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.080802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.080813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.139910 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.140067 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.182415 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.182651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.182795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.182872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.182942 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.270616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.270725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.270742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.270775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.270796 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.287969 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.292159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.292202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.292212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.292233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.292256 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.307481 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.311452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.311852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
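
The interleaved errors in this stretch of the journal reduce to two conditions: the node.network-node-identity.openshift.io webhook served at https://127.0.0.1:9743 presents a certificate that expired at 2025-08-24T17:21:41Z (so every pod and node status patch is rejected), and kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/ (so the node stays NotReady). The sketch below is a hypothetical diagnostic, not part of this log; it assumes Python 3 on the node itself, with the third-party "cryptography" package installed for PEM parsing, and simply ties the two recurring error strings to inspectable state.

#!/usr/bin/env python3
# Hypothetical diagnostic sketch (not from the log): confirm the two failure
# conditions recorded above. Assumes Python 3 on the node and the third-party
# "cryptography" package for certificate parsing.
import os
import ssl

from cryptography import x509

WEBHOOK_ADDR = ("127.0.0.1", 9743)          # network-node-identity webhook endpoint from the log
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory kubelet reports as having no CNI config

# Check 1: fetch the webhook's serving certificate without verification and
# print its validity window; an expired notAfter matches the repeated
# "x509: certificate has expired or is not yet valid" errors.
try:
    pem = ssl.get_server_certificate(WEBHOOK_ADDR)
    cert = x509.load_pem_x509_certificate(pem.encode())
    print(f"webhook cert valid from {cert.not_valid_before} until {cert.not_valid_after}")
except OSError as exc:
    print(f"could not fetch webhook certificate: {exc}")

# Check 2: list CNI config files; an empty directory matches the
# "no CNI configuration file in /etc/kubernetes/cni/net.d/" NetworkReady error.
try:
    entries = sorted(os.listdir(CNI_CONF_DIR))
except FileNotFoundError:
    entries = []
print(f"{CNI_CONF_DIR}: {entries if entries else 'no CNI configuration files present'}")

Equivalent checks are possible with openssl s_client and a plain directory listing; the sketch only makes the two recurring error strings independently verifiable on the node.
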
event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.311973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.312067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.312128 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.326128 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.330242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.330287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.330298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.330317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.330328 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.343562 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.348827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.348894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.348908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.348930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.348943 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.362717 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: E1206 06:25:32.362883 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.364763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.364816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.364830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.364853 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.364868 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.467563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.467631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.467644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.467689 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.467705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.570894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.570961 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.570978 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.571008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.571025 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.674618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.674685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.674698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.674716 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.674730 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.697187 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.716418 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.737318 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.758911 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.771450 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.776850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.776889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.776900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.776917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.776930 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.788394 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.802952 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.816061 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.829282 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.845221 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 
06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.865101 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.879492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.879548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.879561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.879583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.879597 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.883847 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.903351 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.919969 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.935320 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:32Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.983806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.983857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.983872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.983895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:32 crc kubenswrapper[4823]: I1206 06:25:32.983908 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:32Z","lastTransitionTime":"2025-12-06T06:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.086727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.086787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.086797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.086817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.086831 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.140286 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.140312 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.140461 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.140532 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.189548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.189590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.189600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.189617 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.189627 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.292385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.292428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.292440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.292457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.292469 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.395076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.395128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.395143 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.395160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.395171 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.497864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.497913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.497927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.497942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.497954 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.600528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.600580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.600590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.600609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.600622 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.604125 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.604280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.604326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.604356 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.604384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604524 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604554 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604567 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604571 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604616 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:25:49.604599888 +0000 UTC m=+50.890351848 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604643 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604688 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:49.604649859 +0000 UTC m=+50.890401819 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604811 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:25:49.604760742 +0000 UTC m=+50.890512712 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604850 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:49.604836785 +0000 UTC m=+50.890588925 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.604934 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.605008 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.605027 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:25:33 crc kubenswrapper[4823]: E1206 06:25:33.605123 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:49.605098412 +0000 UTC m=+50.890850372 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.704155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.704188 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.704200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.704225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.704239 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.807219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.807264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.807272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.807291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.807304 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.910075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.910134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.910144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.910163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:33 crc kubenswrapper[4823]: I1206 06:25:33.910175 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:33Z","lastTransitionTime":"2025-12-06T06:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.012970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.013013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.013024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.013044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.013060 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.117368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.117422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.117433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.117457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.117473 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.140049 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:25:34 crc kubenswrapper[4823]: E1206 06:25:34.140450 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.220683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.220746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.220764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.220793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.220813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.323836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.324116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.324351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.324542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.324809 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.412637 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/0.log"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.421451 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f" exitCode=1
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.421505 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f"}
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.422395 4823 scope.go:117] "RemoveContainer" containerID="b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.428181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.428279 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.428346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.428489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.428564 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.443200 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.460610 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.478162 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.491959 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.508210 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.529130 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.532465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.532512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.532521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.532538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.532549 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.546081 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.562976 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.581369 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.599803 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.618775 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.635874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.635931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:34 crc 
kubenswrapper[4823]: I1206 06:25:34.635947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.635972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.635984 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.641029 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763e
eaf3683a0704d8f74f97231f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.654783 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.675843 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:34Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.738949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.738995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.739005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.739024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.739040 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.841210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.841270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.841281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.841301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.841312 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.943982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.944017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.944028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.944045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:34 crc kubenswrapper[4823]: I1206 06:25:34.944059 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:34Z","lastTransitionTime":"2025-12-06T06:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.046231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.046264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.046274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.046290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.046303 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.139920 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.140082 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:35 crc kubenswrapper[4823]: E1206 06:25:35.140209 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:35 crc kubenswrapper[4823]: E1206 06:25:35.140309 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.148910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.148939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.148950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.148964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.148974 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.251908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.251949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.251959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.251975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.251986 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.357192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.357474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.357612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.357719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.357849 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.428062 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/0.log" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.431557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.432216 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.449774 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.461554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.461596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.461606 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.461624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.461636 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.470758 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.492819 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.506738 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.530283 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.546016 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.561897 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.563966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.564012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.564021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.564039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.564052 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.584380 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.598331 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.612278 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.630084 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.649655 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.667154 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.675713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.675763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.675776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.675799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.675813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.685609 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:35Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.779524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.779590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.779611 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.779768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.779797 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.882912 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.882984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.882999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.883022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.883037 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.985533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.985574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.985586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.985600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:35 crc kubenswrapper[4823]: I1206 06:25:35.985610 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:35Z","lastTransitionTime":"2025-12-06T06:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.087970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.088004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.088012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.088027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.088037 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.112771 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l"] Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.113517 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.116398 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.116638 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.133267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.139694 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:36 crc kubenswrapper[4823]: E1206 06:25:36.139836 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.150427 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.165822 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.180586 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.190340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.190389 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.190409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.190424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.190436 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.196996 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.213118 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.235218 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.236458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4a571bc-1fba-4a48-b611-5c8d7f46d357-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.236540 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgkf\" (UniqueName: \"kubernetes.io/projected/e4a571bc-1fba-4a48-b611-5c8d7f46d357-kube-api-access-vlgkf\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.236567 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4a571bc-1fba-4a48-b611-5c8d7f46d357-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.236599 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4a571bc-1fba-4a48-b611-5c8d7f46d357-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.249103 4823 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.267643 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.286219 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.292431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.292472 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.292484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.292525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.292540 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.301857 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.316312 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.331456 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.337585 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vlgkf\" (UniqueName: \"kubernetes.io/projected/e4a571bc-1fba-4a48-b611-5c8d7f46d357-kube-api-access-vlgkf\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.337644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4a571bc-1fba-4a48-b611-5c8d7f46d357-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.337687 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4a571bc-1fba-4a48-b611-5c8d7f46d357-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.337722 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4a571bc-1fba-4a48-b611-5c8d7f46d357-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.338519 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4a571bc-1fba-4a48-b611-5c8d7f46d357-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.338711 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4a571bc-1fba-4a48-b611-5c8d7f46d357-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.344903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4a571bc-1fba-4a48-b611-5c8d7f46d357-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.347415 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.360208 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlgkf\" (UniqueName: 
\"kubernetes.io/projected/e4a571bc-1fba-4a48-b611-5c8d7f46d357-kube-api-access-vlgkf\") pod \"ovnkube-control-plane-749d76644c-xbg5l\" (UID: \"e4a571bc-1fba-4a48-b611-5c8d7f46d357\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.368793 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:36Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.396630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.396726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.396741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.396768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.396784 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.430881 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.499427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.499757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.499845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.499932 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.500002 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.602711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.602759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.602772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.602791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.602806 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.706469 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.706514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.706527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.706552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.706569 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.810979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.811024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.811039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.811059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.811119 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.913515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.913553 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.913562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.913578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:36 crc kubenswrapper[4823]: I1206 06:25:36.913587 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:36Z","lastTransitionTime":"2025-12-06T06:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.015371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.015406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.015414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.015427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.015437 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.117739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.117789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.117800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.117819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.117831 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.139868 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.139912 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:37 crc kubenswrapper[4823]: E1206 06:25:37.140066 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:37 crc kubenswrapper[4823]: E1206 06:25:37.140155 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.220301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.220369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.220381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.220401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.220416 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.322878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.322910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.322918 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.322931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.322939 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.424974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.425261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.425274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.425286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.425295 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.438196 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" event={"ID":"e4a571bc-1fba-4a48-b611-5c8d7f46d357","Type":"ContainerStarted","Data":"1f1d8ccd00ccfeb4fd9bb99982e950b4c6cc3579e733a36473c52a57a46b6b93"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.439867 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/1.log" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.440473 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/0.log" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.442876 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837" exitCode=1 Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.442896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.442961 4823 scope.go:117] "RemoveContainer" containerID="b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.443559 4823 scope.go:117] "RemoveContainer" containerID="577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837" Dec 06 06:25:37 crc kubenswrapper[4823]: E1206 06:25:37.443726 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.463274 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.475171 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.487905 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.501413 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.515379 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.527481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.527527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.527539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.527556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.527566 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.533149 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.557533 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9
b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.572858 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.591154 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.605818 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.608633 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-57k6t"] Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.609190 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:37 crc kubenswrapper[4823]: E1206 06:25:37.609245 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.627386 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.629583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.629627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.629637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.629674 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.629685 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.643340 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.658752 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.670453 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.684598 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 
06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.700150 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.717353 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.733092 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.734599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.734649 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.734685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.734708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.734722 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.752744 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.755361 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zz4f\" (UniqueName: \"kubernetes.io/projected/5a2bb8a5-743e-42ed-9f30-850690a30e47-kube-api-access-9zz4f\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.755536 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.783742 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.807001 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.834912 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.837537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.837752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.837875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.838032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.838175 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.856612 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.856681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zz4f\" (UniqueName: \"kubernetes.io/projected/5a2bb8a5-743e-42ed-9f30-850690a30e47-kube-api-access-9zz4f\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:37 crc kubenswrapper[4823]: E1206 06:25:37.857093 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:37 crc kubenswrapper[4823]: E1206 06:25:37.857154 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:38.357137526 +0000 UTC m=+39.642889486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.861938 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411
f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address \\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 
06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.874154 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.878613 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zz4f\" (UniqueName: \"kubernetes.io/projected/5a2bb8a5-743e-42ed-9f30-850690a30e47-kube-api-access-9zz4f\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.891135 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.907466 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.922357 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:3
6Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.938652 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d74
62\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.940720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.940762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.940775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.940793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.940805 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:37Z","lastTransitionTime":"2025-12-06T06:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.951736 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.967679 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:37 crc kubenswrapper[4823]: I1206 06:25:37.982176 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:37Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.043058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.043087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.043097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.043111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.043120 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.140494 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:38 crc kubenswrapper[4823]: E1206 06:25:38.140648 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.145056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.145185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.145268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.145340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.145398 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.248229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.248462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.248601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.248757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.248856 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.351557 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.351593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.351602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.351618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.351627 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.360410 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:38 crc kubenswrapper[4823]: E1206 06:25:38.360564 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:38 crc kubenswrapper[4823]: E1206 06:25:38.360624 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:39.360606725 +0000 UTC m=+40.646358685 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.447383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" event={"ID":"e4a571bc-1fba-4a48-b611-5c8d7f46d357","Type":"ContainerStarted","Data":"ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.447431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" event={"ID":"e4a571bc-1fba-4a48-b611-5c8d7f46d357","Type":"ContainerStarted","Data":"ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.449344 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/1.log" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.452979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.453136 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.453208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.453269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.453339 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.463611 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.475626 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.485583 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.497498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.511381 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.530693 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.548841 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.555850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.555916 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.555925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.555939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.555949 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.563998 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.576087 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.591972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.605892 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.627256 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411
f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address \\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 
06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.643515 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.658338 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.658539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.658581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.658592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.658610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.658622 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.673972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.685825 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:38Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.760694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.760740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.760751 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.760768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.760780 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.863257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.863293 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.863305 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.863324 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.863338 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.965323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.965386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.965398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.965418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:38 crc kubenswrapper[4823]: I1206 06:25:38.965431 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:38Z","lastTransitionTime":"2025-12-06T06:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.067980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.068234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.068339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.068438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.068505 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.140636 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.140734 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.140798 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:39 crc kubenswrapper[4823]: E1206 06:25:39.140825 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:39 crc kubenswrapper[4823]: E1206 06:25:39.141223 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:39 crc kubenswrapper[4823]: E1206 06:25:39.141293 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.153890 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.166314 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.171972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.172035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.172050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.172073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.172090 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.185902 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.202838 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.220973 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.234825 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.249828 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.261114 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.273704 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.275018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.275104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.275150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.275172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.275186 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.286210 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.300287 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.312183 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.325277 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.341418 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.362714 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address \\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.369784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:39 crc kubenswrapper[4823]: E1206 06:25:39.369957 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:39 crc kubenswrapper[4823]: E1206 06:25:39.370025 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:25:41.370006513 +0000 UTC m=+42.655758483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.378718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.378771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.378785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.378806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.378415 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:39Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.378820 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.481980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.482022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.482035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.482051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.482062 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.584346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.584390 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.584401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.584416 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.584426 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.686391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.686442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.686452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.686467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.686478 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.788880 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.788917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.788925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.788941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.788950 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.891737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.891772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.891781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.891795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.891806 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.994430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.994470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.994478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.994492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:39 crc kubenswrapper[4823]: I1206 06:25:39.994500 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:39Z","lastTransitionTime":"2025-12-06T06:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.097072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.097143 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.097157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.097173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.097182 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.140757 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:40 crc kubenswrapper[4823]: E1206 06:25:40.140957 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.199718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.199767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.199780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.199798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.199841 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.302327 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.302375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.302385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.302400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.302410 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.405038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.405085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.405095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.405111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.405120 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.507415 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.507470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.507483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.507507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.507519 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.609931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.609970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.609985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.610003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.610016 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.712984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.713036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.713048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.713065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.713076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.815512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.815584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.815604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.815630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.815648 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.918902 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.918991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.919009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.919035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:40 crc kubenswrapper[4823]: I1206 06:25:40.919053 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:40Z","lastTransitionTime":"2025-12-06T06:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.022031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.022102 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.022113 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.022138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.022608 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.125937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.126023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.126043 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.126072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.126088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.140574 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.140786 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:41 crc kubenswrapper[4823]: E1206 06:25:41.140868 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:41 crc kubenswrapper[4823]: E1206 06:25:41.140996 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.140634 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:41 crc kubenswrapper[4823]: E1206 06:25:41.141130 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.229054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.229090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.229098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.229113 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.229122 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.331730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.331792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.331801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.331823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.331837 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.387083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:41 crc kubenswrapper[4823]: E1206 06:25:41.387233 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:41 crc kubenswrapper[4823]: E1206 06:25:41.387299 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:45.387280384 +0000 UTC m=+46.673032354 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.433887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.433916 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.433924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.433937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.433946 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.536436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.536551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.536568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.536594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.536605 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.638723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.638763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.638771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.638786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.638797 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.741186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.741233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.741244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.741261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.741272 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.843654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.843714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.843728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.843743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.843752 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.946169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.946223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.946233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.946249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:41 crc kubenswrapper[4823]: I1206 06:25:41.946258 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:41Z","lastTransitionTime":"2025-12-06T06:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.049009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.049040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.049051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.049065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.049073 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.140198 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.140313 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.151175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.151203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.151211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.151224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.151232 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.253583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.253622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.253631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.253646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.253655 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.356521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.356560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.356569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.356586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.356597 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.458683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.458719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.458730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.458744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.458755 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.477593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.477643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.477654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.477708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.477720 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.489471 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:42Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.493240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.493361 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.493432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.493504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.493587 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.504848 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
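Every retry that follows fails identically: the kubelet's status PATCH is rejected because the API server's call to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification, the serving certificate having expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-06T06:25:42Z. A minimal sketch for confirming the expiry from the node, assuming openssl is installed there (diagnostic commands, not part of the captured log):

    # Print the validity window of the certificate served on the webhook
    # endpoint taken from the failing Post in the entries below.
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates

If notAfter is in the past, the webhook's certificate has to be rotated (or, if the clock itself is wrong, corrected) before these PATCH calls can succeed.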
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:42Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.508800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.508949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.509020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.509093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.509158 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.522651 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:42Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.525509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.525694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.525787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.525864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.525937 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.537477 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:42Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.540481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.540515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.540525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.540540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.540550 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.552807 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:42Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:42 crc kubenswrapper[4823]: E1206 06:25:42.552932 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.561464 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.561501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.561511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.561527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.561540 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the identical NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" five-record sequence repeats at 06:25:42.664 ...]
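Editor's note: the x509 failure above is the root cause of the status-update retries. The kubelet cannot PATCH its Node object because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z. A minimal diagnostic sketch, run on the node itself, for confirming the expiry; the host and port come from the log, Python 3 with the third-party cryptography package is an assumption:

    import socket
    import ssl

    from cryptography import x509  # third-party package; assumed installed

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the cert is already expired, so skip verification

    # Endpoint taken from the webhook error in the log above.
    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log

The sketch only confirms the diagnosis; on OpenShift this class of failure is typically healed by letting the cluster rotate its certificates after startup.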
[... same sequence again at 06:25:42.766 and 06:25:42.869 ...] Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.972125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.972186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.972196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.972212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:42 crc kubenswrapper[4823]: I1206 06:25:42.972221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:42Z","lastTransitionTime":"2025-12-06T06:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
[... same sequence at 06:25:43.074 ...] Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.140088 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.140126 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.140185 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:43 crc kubenswrapper[4823]: E1206 06:25:43.140272 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:43 crc kubenswrapper[4823]: E1206 06:25:43.140367 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:43 crc kubenswrapper[4823]: E1206 06:25:43.140542 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
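Editor's note: every sandbox start above is blocked on one condition: the kubelet reports the runtime network NotReady until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, presumably written by the network operator (OVN-Kubernetes on CRC) once it comes up, which in turn is stalled by the expired webhook certificate. A throwaway check along those lines, run on the node; the directory path comes from the log message, while the accepted file extensions are an assumption about common CNI config names:

    import pathlib

    cni_dir = pathlib.Path("/etc/kubernetes/cni/net.d")  # path from the kubelet message
    if not cni_dir.is_dir():
        raise SystemExit(f"{cni_dir} does not exist")

    # CNI configs are conventionally *.conf, *.conflist or *.json files.
    confs = sorted(p for p in cni_dir.iterdir() if p.suffix in {".conf", ".conflist", ".json"})
    if confs:
        print("CNI configuration present:", *map(str, confs), sep="\n  ")
    else:
        print(f"no CNI configuration file in {cni_dir} - network provider not started yet")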
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.180323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.180373 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.180385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.180403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.180416 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.285376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.285437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.285455 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.285473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.285491 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.388366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.388406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.388414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.388428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.388437 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.491088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.491152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.491167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.491189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.491207 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.593369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.593404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.593413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.593427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.593438 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.695775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.695811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.695821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.695836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.695847 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.798506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.798545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.798556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.798574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.798586 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.900815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.900852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.900864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.900881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:43 crc kubenswrapper[4823]: I1206 06:25:43.900893 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:43Z","lastTransitionTime":"2025-12-06T06:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... same sequence at 06:25:44.003 and 06:25:44.106 ...] Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.140784 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:44 crc kubenswrapper[4823]: E1206 06:25:44.141015 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
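Editor's note: the condition={...} payload that setters.go prints on every iteration is the literal NodeCondition the kubelet wants to write back to the API server (and cannot, per the retry failure above). For reference, a sketch of the readiness test a consumer of that object would apply, using the exact JSON from the log:

    import json

    # Condition object copied verbatim from the setters.go line above.
    logged = ('{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z",'
              '"lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady",'
              '"message":"container runtime network not ready: NetworkReady=false '
              'reason:NetworkPluginNotReady message:Network plugin returns error: no CNI '
              'configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}')

    cond = json.loads(logged)
    node_ready = cond["type"] == "Ready" and cond["status"] == "True"
    print(node_ready, "-", cond["reason"])  # False - KubeletNotReady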
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.208393 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.208428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.208438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.208451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.208462 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.311050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.311098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.311107 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.311123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.311133 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.414311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.414806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.414832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.414851 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.414863 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.517496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.517538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.517548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.517565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.517578 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.619860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.619899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.619910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.619927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.619940 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.722642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.722687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.722695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.722707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.722715 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.825077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.825147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.825161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.825186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.825201 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.928352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.928410 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.928424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.928444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:44 crc kubenswrapper[4823]: I1206 06:25:44.928458 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:44Z","lastTransitionTime":"2025-12-06T06:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.031415 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.031452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.031460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.031474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.031490 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:45Z","lastTransitionTime":"2025-12-06T06:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.137732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.137773 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.137782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.137797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.137805 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:45Z","lastTransitionTime":"2025-12-06T06:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.140329 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.140363 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:45 crc kubenswrapper[4823]: E1206 06:25:45.140451 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.140517 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:45 crc kubenswrapper[4823]: E1206 06:25:45.140593 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:45 crc kubenswrapper[4823]: E1206 06:25:45.140728 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.240286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.240337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.240348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.240363 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.240372 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:45Z","lastTransitionTime":"2025-12-06T06:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.344026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.344073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.344084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.344105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.344118 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:45Z","lastTransitionTime":"2025-12-06T06:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... same sequence at 06:25:45.447 ...] Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.486370 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:45 crc kubenswrapper[4823]: E1206 06:25:45.486546 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:45 crc kubenswrapper[4823]: E1206 06:25:45.486658 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:25:53.486630967 +0000 UTC m=+54.772382927 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered
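Editor's note: the mount failure itself is self-healing. "Not registered" means the kubelet's secret manager has not yet been told about this pod's secrets, and nestedpendingoperations simply re-queues the mount with exponential backoff; the 8s above reads as the fifth step of that ladder. A sketch of the delay schedule, assuming the upstream kubelet defaults of a 500 ms initial delay, doubling per failure, capped at 2m2s (the defaults are an assumption; only the 8 s step is read from this log):

    import itertools

    INITIAL_S, FACTOR, CAP_S = 0.5, 2.0, 122.0  # assumed upstream kubelet defaults

    def duration_before_retry():
        """Yield successive retry delays in seconds, doubling up to the cap."""
        delay = INITIAL_S
        while True:
            yield min(delay, CAP_S)
            delay *= FACTOR

    print([f"{d:g}s" for d in itertools.islice(duration_before_retry(), 9)])
    # ['0.5s', '1s', '2s', '4s', '8s', '16s', '32s', '64s', '122s']

The 8 s step matches the log exactly: the failure at 06:25:45.486630967 schedules the next attempt for 06:25:53.486630967 (m=+54.77 on the kubelet's monotonic clock).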
[... same not-ready event sequence at 06:25:45.550, 45.654, 45.758 and 45.860 ...] Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.963168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.963206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.963219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.963234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:45 crc kubenswrapper[4823]: I1206 06:25:45.963247 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:45Z","lastTransitionTime":"2025-12-06T06:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.065551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.065588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.065601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.065619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.065630 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:46Z","lastTransitionTime":"2025-12-06T06:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.139989 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:46 crc kubenswrapper[4823]: E1206 06:25:46.140160 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.169475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.169558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.169569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.169586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:46 crc kubenswrapper[4823]: I1206 06:25:46.169596 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:46Z","lastTransitionTime":"2025-12-06T06:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... same sequence at 06:25:46.273, 46.375, 46.478, 46.581, 46.684, 46.787, 46.890 and 46.993 ...] Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.095782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.095846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.095857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.095880 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.095891 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.140550 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:47 crc kubenswrapper[4823]: E1206 06:25:47.140750 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.141113 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.140979 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:47 crc kubenswrapper[4823]: E1206 06:25:47.141286 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:47 crc kubenswrapper[4823]: E1206 06:25:47.141379 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.198753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.198818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.198844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.198865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.198873 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.301800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.301850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.301860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.301874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.301883 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.404754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.404799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.404812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.404827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.404839 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.507198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.507237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.507250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.507266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.507277 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.609922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.609970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.609981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.609999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.610011 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.712437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.712474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.712482 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.712496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.712505 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.814526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.814608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.814627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.814646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.814670 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.917026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.917059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.917069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.917083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:47 crc kubenswrapper[4823]: I1206 06:25:47.917092 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:47Z","lastTransitionTime":"2025-12-06T06:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.020038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.020096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.020108 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.020124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.020153 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.122244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.122328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.122340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.122365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.122381 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.140423 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:48 crc kubenswrapper[4823]: E1206 06:25:48.140554 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.226234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.226282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.226294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.226319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.226332 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.329815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.329846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.329855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.329868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.329877 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.432185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.432238 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.432290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.432311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.432323 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.534783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.534822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.534832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.534847 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.534859 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.637273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.637308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.637317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.637331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.637340 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.739604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.739650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.739693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.739716 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.739734 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.841936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.841972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.841983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.841996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.842005 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.861612 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.872190 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.873354 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.886507 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.898336 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 
2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.908346 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.919625 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.931166 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.944425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.944456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.944465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.944480 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.944491 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:48Z","lastTransitionTime":"2025-12-06T06:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.945101 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.955650 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.964628 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.976310 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.986332 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:48 crc kubenswrapper[4823]: I1206 06:25:48.996075 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:48Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.008701 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.020843 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.034794 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.046619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.046653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc 
kubenswrapper[4823]: I1206 06:25:49.046684 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.046704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.046718 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.052139 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411
f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address \\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 
06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.140358 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.140398 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.140476 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.140717 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.140768 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.140854 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.148893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.149199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.149786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.149913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.149994 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.154438 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06
:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.165815 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubern
etes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.178179 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 
1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.188036 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.201209 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.213779 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.224040 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.236225 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 
06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.247946 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.251974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.252015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.252024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.252039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.252049 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.260351 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.270988 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.286918 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.302129 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.316032 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.329937 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.344807 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.355344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.355395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.355407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.355426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.355440 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.362060 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fd34119863a2a7ad188da17a795cd62d26763eeaf3683a0704d8f74f97231f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:34Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:33.143256 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:33.143294 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 06:25:33.143318 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1206 06:25:33.143350 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1206 06:25:33.143382 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:33.143387 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 06:25:33.143408 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1206 06:25:33.143631 6125 factory.go:656] Stopping watch factory\\\\nI1206 06:25:33.143649 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:33.143703 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:33.143714 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:25:33.143721 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 06:25:33.143729 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 06:25:33.143735 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:33.144050 6125 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:49Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.457807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.457857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.457870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.457888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.457900 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.560382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.560422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.560432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.560452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.560463 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.626116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626323 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:26:21.626293905 +0000 UTC m=+82.912045865 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.626368 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.626404 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.626437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.626462 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626605 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626618 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626628 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626675 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:26:21.626651115 +0000 UTC m=+82.912403075 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626710 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626731 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:26:21.626725497 +0000 UTC m=+82.912477457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626768 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626787 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:26:21.626781629 +0000 UTC m=+82.912533589 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626855 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626877 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626889 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:49 crc kubenswrapper[4823]: E1206 06:25:49.626935 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-06 06:26:21.626920493 +0000 UTC m=+82.912672453 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.664817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.664859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.664869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.664884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.664894 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.767649 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.767739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.767763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.767792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.767813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.869906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.869942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.869960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.869979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.869991 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.972331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.972366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.972376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.972392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:49 crc kubenswrapper[4823]: I1206 06:25:49.972401 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:49Z","lastTransitionTime":"2025-12-06T06:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.075272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.075312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.075323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.075338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.075348 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.141105 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:50 crc kubenswrapper[4823]: E1206 06:25:50.141245 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.141491 4823 scope.go:117] "RemoveContainer" containerID="577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.156694 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/o
vnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.170045 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.177923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.177958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.177966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.177981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.177990 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.180973 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.192178 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.206970 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.218178 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.230352 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.240324 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.251353 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.261639 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.273995 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.280395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.280536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.280550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.280592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.280610 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.287118 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.300750 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.315985 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.336374 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411
f8754b66fff59dc371eab837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.348389 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.362799 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:50Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.384203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.384246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.384255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.384269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.384279 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.488322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.488376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.488392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.488412 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.488433 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.591338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.591374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.591385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.591400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.591412 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.694271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.694306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.694316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.694329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.694339 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.796710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.796756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.796772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.796787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.796801 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.899691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.899742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.899757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.899774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:50 crc kubenswrapper[4823]: I1206 06:25:50.899790 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:50Z","lastTransitionTime":"2025-12-06T06:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.002686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.002737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.002747 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.002763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.002774 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.105427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.105462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.105470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.105483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.105491 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.140281 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.140357 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:51 crc kubenswrapper[4823]: E1206 06:25:51.140430 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:51 crc kubenswrapper[4823]: E1206 06:25:51.140489 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.140543 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:51 crc kubenswrapper[4823]: E1206 06:25:51.140612 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.208618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.208706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.208719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.208736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.208746 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.311168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.311201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.311210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.311223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.311231 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.413591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.413626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.413634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.413647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.413669 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.490855 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/2.log" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.491721 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/1.log" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.495160 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea" exitCode=1 Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.495199 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.495278 4823 scope.go:117] "RemoveContainer" containerID="577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.496083 4823 scope.go:117] "RemoveContainer" containerID="b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea" Dec 06 06:25:51 crc kubenswrapper[4823]: E1206 06:25:51.496246 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.512626 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.516296 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.516357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc 
kubenswrapper[4823]: I1206 06:25:51.516368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.516392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.516407 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.534491 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714
f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577fadfddaa42d928529b3090b29d975643be411f8754b66fff59dc371eab837\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"ailability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006eed2cb \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 06:25:35.283843 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped 
ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.547168 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.557084 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.572877 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.591107 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.607476 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.620380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.620425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.620433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.620449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.620459 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.622526 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.637306 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.652543 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.667257 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.678849 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.690305 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.703465 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.715891 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.722970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.723011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.723024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.723039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.723050 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.728870 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.747156 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:51Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.828824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.829090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.829176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.829196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.829206 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.932428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.932463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.932471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.932486 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:51 crc kubenswrapper[4823]: I1206 06:25:51.932494 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:51Z","lastTransitionTime":"2025-12-06T06:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.034603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.034646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.034657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.034695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.034708 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.138419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.138497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.138519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.138549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.138571 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.139883 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.140321 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.241212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.241240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.241248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.241262 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.241272 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.343626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.343684 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.343695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.343711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.343721 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.446573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.446802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.446870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.446937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.447027 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.499491 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/2.log"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.550309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.550360 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.550372 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.550391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.550401 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.654162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.654209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.654217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.654233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.654248 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.756163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.756205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.756215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.756231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.756243 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.782380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.782782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.782885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.782962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.783026 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.795586 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.800996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.801280 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.801399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.801505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.801593 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.823239 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.841831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.842165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.842356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.842467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.842544 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.865760 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.870788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.870826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.870838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.870854 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.870865 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.890104 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.895983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.896032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.896044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.896064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.896077 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.912356 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:52Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:52 crc kubenswrapper[4823]: E1206 06:25:52.912535 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.914332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.914401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.914414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.914445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:52 crc kubenswrapper[4823]: I1206 06:25:52.914460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:52Z","lastTransitionTime":"2025-12-06T06:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.016973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.017011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.017022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.017038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.017048 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.119555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.120111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.120180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.120241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.120296 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.140922 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:53 crc kubenswrapper[4823]: E1206 06:25:53.141048 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.141200 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:53 crc kubenswrapper[4823]: E1206 06:25:53.141243 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.141444 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:53 crc kubenswrapper[4823]: E1206 06:25:53.141590 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.223044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.223379 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.223477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.223564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.223649 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.325968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.326053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.326068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.326084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.326096 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.428691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.429025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.429085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.429152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.429266 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.532158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.532195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.532206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.532221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.532230 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.569130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:53 crc kubenswrapper[4823]: E1206 06:25:53.569260 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:53 crc kubenswrapper[4823]: E1206 06:25:53.569311 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:26:09.569297143 +0000 UTC m=+70.855049103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.635656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.635712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.635721 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.635738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.635748 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.739763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.739820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.739842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.739875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.739885 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.842863 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.843379 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.843483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.843589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.843749 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.946311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.946393 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.946404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.946432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:53 crc kubenswrapper[4823]: I1206 06:25:53.946445 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:53Z","lastTransitionTime":"2025-12-06T06:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.049996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.050028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.050036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.050050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.050059 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.140266 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:54 crc kubenswrapper[4823]: E1206 06:25:54.140395 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.152221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.152269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.152281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.152301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.152313 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.254494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.254558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.254574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.254597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.254616 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.357328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.357592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.357730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.357829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.357901 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.460543 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.460805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.460904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.460999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.461076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.564613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.564923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.565049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.565201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.565301 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.668594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.668643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.668657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.668700 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.668713 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.771434 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.771474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.771484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.771497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.771507 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.874147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.874198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.874210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.874235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.874249 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.976433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.976478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.976487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.976503 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:54 crc kubenswrapper[4823]: I1206 06:25:54.976512 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:54Z","lastTransitionTime":"2025-12-06T06:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.078756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.078994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.079005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.079020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.079032 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.140006 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.140161 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.140183 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:55 crc kubenswrapper[4823]: E1206 06:25:55.140227 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:55 crc kubenswrapper[4823]: E1206 06:25:55.140243 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:55 crc kubenswrapper[4823]: E1206 06:25:55.140323 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.181218 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.181257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.181275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.181294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.181306 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.192628 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.193333 4823 scope.go:117] "RemoveContainer" containerID="b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea" Dec 06 06:25:55 crc kubenswrapper[4823]: E1206 06:25:55.193486 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.204893 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.218507 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.233590 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.251251 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.280941 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.284417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.284633 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.284744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.284830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.284934 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.298506 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.317550 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.339554 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.350934 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.361826 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.374915 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},
{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.384366 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.386995 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.387019 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.387028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.387040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.387050 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.393990 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.403087 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.416878 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.426472 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.437439 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:55Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.489694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.489731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.489740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.489756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.489767 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.592815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.592879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.592891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.592917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.592929 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.694575 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.694620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.694629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.694643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.694655 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.797124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.797171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.797182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.797199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.797210 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.899519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.899558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.899568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.899581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:55 crc kubenswrapper[4823]: I1206 06:25:55.899589 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:55Z","lastTransitionTime":"2025-12-06T06:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.002309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.002370 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.002379 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.002394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.002408 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.104330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.104365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.104376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.104391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.104402 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.140094 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:56 crc kubenswrapper[4823]: E1206 06:25:56.140246 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.205887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.205925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.205937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.205954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.205965 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.308286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.308329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.308343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.308361 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.308371 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.410416 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.410454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.410467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.410482 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.410493 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.512971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.513014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.513024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.513041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.513052 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.616004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.616032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.616041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.616053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.616063 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.718650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.719306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.719319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.719331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.719339 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.821998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.822031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.822041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.822054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.822062 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.924800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.924856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.924866 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.924883 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:56 crc kubenswrapper[4823]: I1206 06:25:56.924893 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:56Z","lastTransitionTime":"2025-12-06T06:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.027311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.027341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.027352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.027367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.027379 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.130323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.130366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.130375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.130392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.130402 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.140965 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.141042 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.141171 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:57 crc kubenswrapper[4823]: E1206 06:25:57.141219 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:57 crc kubenswrapper[4823]: E1206 06:25:57.141383 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:57 crc kubenswrapper[4823]: E1206 06:25:57.141481 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.233429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.233523 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.233548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.233580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.233600 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.335541 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.335820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.335899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.335966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.336027 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.438021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.438096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.438111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.438135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.438150 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.540346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.540383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.540392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.540407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.540418 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.643244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.643300 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.643312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.643330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.643341 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.746216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.746307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.746321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.746347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.746364 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.850020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.850101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.850114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.850138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.850154 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.952069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.952116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.952128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.952143 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:57 crc kubenswrapper[4823]: I1206 06:25:57.952155 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:57Z","lastTransitionTime":"2025-12-06T06:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.054558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.054595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.054605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.054619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.054628 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.140309 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:25:58 crc kubenswrapper[4823]: E1206 06:25:58.140452 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.156829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.156859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.156872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.156893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.156905 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.258729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.259040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.259117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.259199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.259266 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.361282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.361332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.361345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.361360 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.361369 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.464097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.464137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.464146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.464159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.464168 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.567124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.567170 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.567182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.567198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.567212 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.670560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.671079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.671256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.671424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.671650 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.774031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.774069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.774079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.774095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.774107 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.876256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.876288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.876297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.876316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.876332 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.978893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.978927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.978935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.978946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:58 crc kubenswrapper[4823]: I1206 06:25:58.978955 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:58Z","lastTransitionTime":"2025-12-06T06:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.081470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.081509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.081520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.081535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.081547 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.140248 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:25:59 crc kubenswrapper[4823]: E1206 06:25:59.140423 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.140440 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.140526 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:25:59 crc kubenswrapper[4823]: E1206 06:25:59.140582 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:25:59 crc kubenswrapper[4823]: E1206 06:25:59.140699 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.150190 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.162016 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.177378 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.183499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.183537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.183551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.183567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.183579 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.188539 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.200427 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.214192 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.226790 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.243942 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 
06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.254858 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.266211 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.277074 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.287380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.287416 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.287424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.287443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.287452 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.292161 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.304847 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.314776 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.327498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.340693 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z" Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.358098 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714
f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:25:59Z is after 2025-08-24T17:21:41Z"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.389595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.389672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.389685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.389705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.389717 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.492135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.492229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.492249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.492281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.492302 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.594906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.594941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.594951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.594965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.594974 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.697591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.697618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.697626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.697637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.697646 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.800521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.800551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.800560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.800573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.800581 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.903491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.904035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.904046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.904069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:25:59 crc kubenswrapper[4823]: I1206 06:25:59.904081 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:25:59Z","lastTransitionTime":"2025-12-06T06:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.006907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.006974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.006988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.007014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.007030 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.110574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.110635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.110648 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.110688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.110702 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.140636 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:00 crc kubenswrapper[4823]: E1206 06:26:00.140858 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.213058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.213132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.213146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.213162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.213172 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.315682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.315729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.315740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.315755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.315764 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.417452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.417492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.417501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.417515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.417525 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.520424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.520462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.520472 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.520486 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.520494 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.622509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.622541 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.622550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.622563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.622573 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.725426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.725466 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.725476 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.725491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.725500 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.827949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.827997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.828010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.828026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.828038 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.929992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.930038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.930050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.930068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:00 crc kubenswrapper[4823]: I1206 06:26:00.930079 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:00Z","lastTransitionTime":"2025-12-06T06:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.032547 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.032586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.032597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.032613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.032624 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.134375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.134418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.134430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.134447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.134460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.140704 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.140747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.140785 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:01 crc kubenswrapper[4823]: E1206 06:26:01.140877 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:01 crc kubenswrapper[4823]: E1206 06:26:01.141005 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:26:01 crc kubenswrapper[4823]: E1206 06:26:01.141082 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.236925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.237533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.237549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.237564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.237574 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.339855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.339899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.339911 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.339926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.339938 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.442375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.442413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.442421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.442435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.442445 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.544591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.544635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.544644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.544678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.544689 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.648119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.648208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.648241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.648280 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.648299 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.751016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.751048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.751057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.751074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.751088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.854328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.854565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.854647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.854820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.854911 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.957603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.957806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.957827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.957843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:01 crc kubenswrapper[4823]: I1206 06:26:01.957857 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:01Z","lastTransitionTime":"2025-12-06T06:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.060925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.062985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.063001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.063030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.063043 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.140219 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:02 crc kubenswrapper[4823]: E1206 06:26:02.140610 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.165418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.165459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.165484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.165500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.165512 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.267784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.267828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.267839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.267915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.267929 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.370065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.370112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.370125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.370142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.370152 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.473190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.473240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.473251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.473269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.473279 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.575656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.575731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.575743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.575758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.575769 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.678743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.678784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.678795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.678808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.678817 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.781121 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.781349 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.781693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.781784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.781859 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.883869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.884088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.884203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.884282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.884350 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.975741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.975773 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.975782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.975796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.975804 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:02 crc kubenswrapper[4823]: E1206 06:26:02.989212 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:02Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.992534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.992561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.992569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.992581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:02 crc kubenswrapper[4823]: I1206 06:26:02.992589 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:02Z","lastTransitionTime":"2025-12-06T06:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.003566 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:03Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.006748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.006784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.006793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.006806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.006816 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.019259 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:03Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.023786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.023817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.023826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.023840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.023849 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.035890 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:03Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.040070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.040119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.040129 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.040161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.040173 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.052342 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:03Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.052740 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.054339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.054382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.054398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.054418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.054435 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.140630 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.140792 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.140853 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.140918 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.141001 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:03 crc kubenswrapper[4823]: E1206 06:26:03.141228 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.157965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.158025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.158040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.158062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.158076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.261489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.261534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.261545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.261561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.261573 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.364181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.364250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.364267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.364285 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.364298 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.466925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.466959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.466967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.466981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.466992 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.569419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.569457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.569467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.569485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.569497 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.674153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.674414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.674426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.674443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.674460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.777006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.777064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.777077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.777094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.777106 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.890568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.890628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.890642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.890682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.890699 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.992687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.992724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.992735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.992751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:03 crc kubenswrapper[4823]: I1206 06:26:03.992760 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:03Z","lastTransitionTime":"2025-12-06T06:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.095127 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.095172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.095181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.095211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.095220 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.139992 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:04 crc kubenswrapper[4823]: E1206 06:26:04.140199 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.197977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.198067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.198078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.198092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.198101 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.299907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.299937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.299947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.299963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.299976 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.402685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.402713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.402721 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.402736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.402745 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.504920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.504948 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.504966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.504979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.504988 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.607531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.607919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.607999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.608083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.608151 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.710758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.710801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.710812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.710830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.710842 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.815119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.815162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.815176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.815193 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.815204 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.917513 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.917831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.917935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.918011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:04 crc kubenswrapper[4823]: I1206 06:26:04.918078 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:04Z","lastTransitionTime":"2025-12-06T06:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.020816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.020884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.020897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.020933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.020946 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.123609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.123878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.123958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.124032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.124113 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.140046 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.140106 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:05 crc kubenswrapper[4823]: E1206 06:26:05.140161 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:05 crc kubenswrapper[4823]: E1206 06:26:05.140227 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.140291 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:05 crc kubenswrapper[4823]: E1206 06:26:05.140349 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.226183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.226399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.226462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.226566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.226656 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.328620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.328675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.328687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.328702 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.328711 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.430458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.430502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.430511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.430525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.430536 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.532481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.532518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.532527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.532538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.532548 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.635045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.635093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.635108 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.635125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.635138 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.737951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.737993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.738007 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.738023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.738039 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.840495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.840551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.840562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.840580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.840592 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.942988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.943024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.943034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.943048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:05 crc kubenswrapper[4823]: I1206 06:26:05.943057 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:05Z","lastTransitionTime":"2025-12-06T06:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.045794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.045837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.045848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.045865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.045879 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.140209 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:06 crc kubenswrapper[4823]: E1206 06:26:06.140340 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.147794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.147824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.147834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.147846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.147856 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.250502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.250558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.250567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.250581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.250590 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.352717 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.352751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.352761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.352775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.352784 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.455087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.455117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.455124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.455137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.455145 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.558575 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.558628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.558640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.558676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.558690 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.669895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.669944 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.669958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.669976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.669987 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.772316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.772350 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.772358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.772372 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.772381 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.874859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.874896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.874905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.874921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.874930 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.977460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.977499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.977508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.977521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:06 crc kubenswrapper[4823]: I1206 06:26:06.977530 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:06Z","lastTransitionTime":"2025-12-06T06:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.080024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.080071 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.080080 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.080094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.080103 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.140713 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.140768 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.140847 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:07 crc kubenswrapper[4823]: E1206 06:26:07.140962 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:07 crc kubenswrapper[4823]: E1206 06:26:07.141044 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:07 crc kubenswrapper[4823]: E1206 06:26:07.141504 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.141688 4823 scope.go:117] "RemoveContainer" containerID="b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea" Dec 06 06:26:07 crc kubenswrapper[4823]: E1206 06:26:07.141957 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.181844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.181897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.181911 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.181927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.181938 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.283941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.283973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.283983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.283999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.284009 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.386587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.386628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.386682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.386703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.386714 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.488813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.488846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.488856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.488871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.488882 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.591111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.591177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.591187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.591202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.591213 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.693481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.693527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.693539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.693555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.693565 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.795468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.795511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.795522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.795539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.795549 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.898002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.898062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.898075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.898091 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:07 crc kubenswrapper[4823]: I1206 06:26:07.898103 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:07Z","lastTransitionTime":"2025-12-06T06:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.001215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.001269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.001278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.001294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.001304 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.103284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.103321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.103330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.103348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.103356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.139888 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:08 crc kubenswrapper[4823]: E1206 06:26:08.140028 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.206033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.206061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.206069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.206082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.206093 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.309869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.309915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.309925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.309942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.309953 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.413949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.414000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.414012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.414026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.414039 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.517197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.518157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.518176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.518191 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.518204 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.621050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.621087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.621099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.621112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.621121 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.724644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.724730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.724742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.724791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.724803 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.827044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.827089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.827100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.827117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.827146 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.929002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.929033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.929041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.929054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:08 crc kubenswrapper[4823]: I1206 06:26:08.929064 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:08Z","lastTransitionTime":"2025-12-06T06:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.032141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.032204 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.032213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.032226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.032254 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.135268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.135317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.135329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.135347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.135358 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.140571 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.140598 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.140693 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:09 crc kubenswrapper[4823]: E1206 06:26:09.140770 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:09 crc kubenswrapper[4823]: E1206 06:26:09.140848 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:09 crc kubenswrapper[4823]: E1206 06:26:09.140947 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.158102 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.180827 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714
f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.195333 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.206725 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.219117 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.233498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.237084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.237110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.237119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.237132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.237142 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.244081 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.254744 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.265596 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.275149 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.287171 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 
06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.299591 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.314003 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.327219 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.340208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.340412 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.340473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.340533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.340611 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.342493 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.354568 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.367433 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:09Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.442925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.442960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.442969 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.442982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.443013 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.545512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.546487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.546582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.546694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.546781 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.647799 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:09 crc kubenswrapper[4823]: E1206 06:26:09.647977 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:26:09 crc kubenswrapper[4823]: E1206 06:26:09.648041 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:26:41.648022719 +0000 UTC m=+102.933774679 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.648812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.648859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.648869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.648887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.648897 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.750627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.750676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.750688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.750704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.750715 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.853394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.853449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.853461 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.853477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.853489 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.955465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.955503 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.955512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.955526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:09 crc kubenswrapper[4823]: I1206 06:26:09.955535 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:09Z","lastTransitionTime":"2025-12-06T06:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.057535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.057571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.057584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.057602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.057613 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.140625 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:10 crc kubenswrapper[4823]: E1206 06:26:10.140836 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.160276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.160318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.160332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.160351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.160364 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.263235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.263266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.263275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.263299 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.263314 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.365857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.365895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.365928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.365953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.365965 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.469041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.469078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.469089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.469106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.469127 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.555523 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/0.log" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.555597 4823 generic.go:334] "Generic (PLEG): container finished" podID="e2faf943-388e-4105-a30d-b0bbb041f8e0" containerID="df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7" exitCode=1 Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.555649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerDied","Data":"df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.556123 4823 scope.go:117] "RemoveContainer" containerID="df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.570976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.571006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.571015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.571028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.571045 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.573616 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.586046 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.595924 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.608300 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.619380 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.632688 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.643959 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.656375 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.665991 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.673477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.673510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.673523 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.673558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.673569 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.679289 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.689075 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.701210 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.713841 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.729685 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.748436 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.759260 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.773630 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"2025-12-06T06:25:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376\\\\n2025-12-06T06:25:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376 to /host/opt/cni/bin/\\\\n2025-12-06T06:25:25Z [verbose] multus-daemon started\\\\n2025-12-06T06:25:25Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:26:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:10Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.776459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.776486 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.776497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.776518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.776538 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.878815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.879050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.879117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.879203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.879284 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.981162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.981205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.981226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.981246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:10 crc kubenswrapper[4823]: I1206 06:26:10.981259 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:10Z","lastTransitionTime":"2025-12-06T06:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.083148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.083197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.083208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.083224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.083256 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.140756 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.140801 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.140824 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:11 crc kubenswrapper[4823]: E1206 06:26:11.140927 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:11 crc kubenswrapper[4823]: E1206 06:26:11.141026 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:11 crc kubenswrapper[4823]: E1206 06:26:11.147960 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.185162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.185392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.185454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.185515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.185579 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.288746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.289073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.289153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.289220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.289286 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.392712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.392753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.392770 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.392788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.392802 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.495490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.495799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.495904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.495984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.496050 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.559716 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/0.log" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.560624 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerStarted","Data":"31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.575087 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.591850 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806
b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.602874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.602930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.602944 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.602965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.602978 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.608965 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.623040 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.637145 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.649230 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.662891 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 
06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.674771 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.686289 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.698631 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.707506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.707540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.707551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.707568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.707578 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
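The rejections themselves all state the same root cause at the end of each entry: the status patch must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that endpoint is serving a certificate that expired on 2025-08-24T17:21:41Z, months before the node's current time of 2025-12-06. A quick way to confirm that from the node, sketched in Go (an assumed diagnostic, not an OpenShift tool): complete a handshake without verification and compare the served certificate's validity window against the clock.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Address taken from the failing webhook calls in the log.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // inspect the cert, don't trust it
    	})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	now := time.Now().UTC()
    	fmt.Printf("subject:   %s\n", cert.Subject)
    	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
    	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
    	if now.After(cert.NotAfter) {
    		// Matches the log: current time ... is after <notAfter>.
    		fmt.Println("certificate has expired")
    	}
    }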
Has your network provider started?"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.710654 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.723444 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.736500 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.753574 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.773834 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714
f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.785331 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.798300 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"2025-12-06T06:25:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376\\\\n2025-12-06T06:25:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376 to /host/opt/cni/bin/\\\\n2025-12-06T06:25:25Z [verbose] multus-daemon started\\\\n2025-12-06T06:25:25Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:26:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:11Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.810130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.810200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.810213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.810250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.810263 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.913436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.913477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.913488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.913514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:11 crc kubenswrapper[4823]: I1206 06:26:11.913537 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:11Z","lastTransitionTime":"2025-12-06T06:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.015520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.015574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.015587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.015600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.015610 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.118298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.118346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.118357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.118374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.118386 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.139713 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:12 crc kubenswrapper[4823]: E1206 06:26:12.139853 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.221005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.221047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.221060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.221076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.221088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.323871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.323910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.323922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.323938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.323952 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.428019 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.428073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.428087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.428110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.428122 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.531430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.531467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.531477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.531499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.531515 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.634698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.634738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.634749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.634765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.634777 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.737138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.737185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.737196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.737211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.737221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.840113 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.840157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.840210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.840228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.840240 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.942986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.943033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.943045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.943062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:12 crc kubenswrapper[4823]: I1206 06:26:12.943076 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:12Z","lastTransitionTime":"2025-12-06T06:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.045622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.045712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.045725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.045742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.045754 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.140897 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.141147 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.141284 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.141404 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
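
Every status-patch failure in the records above bottoms out in the same TLS error: the pod.network-node-identity.openshift.io webhook serving on https://127.0.0.1:9743 presents a certificate that expired at 2025-08-24T17:21:41Z, months before the node's current clock of 2025-12-06. The sketch below is a minimal, illustrative way to confirm that diagnosis independently of kubelet; it assumes the third-party cryptography package is available on the node and reuses the host and port exactly as they appear in the log.

    import socket, ssl, datetime
    from cryptography import x509  # third-party; assumed available

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the errors above

    # Turn off verification so the handshake completes even though the
    # certificate is expired; the goal is to inspect it, not to trust it.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER certificate

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.utcnow()
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    print("expired:  ", now > cert.not_valid_after)

Run against this node, the script should report a notAfter of 2025-08-24 17:21:41 and expired True, matching the expiry quoted in the kubelet errors above.
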
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.141558 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.143778 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.147889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.147925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.147934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.147947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.147960 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.157383 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.250317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.250351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.250360 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.250375 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.250386 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.268002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.268041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.268050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.268065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.268075 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.280534 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:13Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.285845 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.285887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.285900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.285919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.285934 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.297401 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:13Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.301228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.301511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.301624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.301767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.301878 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.316454 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:13Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.320924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.321337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.321444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.321549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.321627 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.333146 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:13Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.336215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.336255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.336267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.336284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.336297 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.349176 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:13Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:13 crc kubenswrapper[4823]: E1206 06:26:13.349287 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.352829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.352865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.352877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.352891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.352901 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.456916 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.457351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.457423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.457521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.457608 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.561165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.561208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.561218 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.561235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.561248 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.664725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.664782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.664792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.664811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.664827 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.767957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.768009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.768020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.768040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.768053 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.870984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.871065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.871076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.871099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.871125 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.974158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.974209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.974222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.974240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:13 crc kubenswrapper[4823]: I1206 06:26:13.974252 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:13Z","lastTransitionTime":"2025-12-06T06:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.077212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.077261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.077271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.077331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.077344 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.140155 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:14 crc kubenswrapper[4823]: E1206 06:26:14.140400 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.180294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.180343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.180355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.180374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.180407 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.283038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.283079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.283089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.283105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.283116 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.385190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.385215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.385223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.385234 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.385242 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.487923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.487959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.487975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.487990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.488000 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.590923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.590986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.590997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.591012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.591024 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.693290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.693336 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.693348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.693363 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.693374 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.796092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.796142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.796153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.796167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.796177 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.899034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.899090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.899101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.899117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:14 crc kubenswrapper[4823]: I1206 06:26:14.899128 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:14Z","lastTransitionTime":"2025-12-06T06:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.001502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.001573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.001587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.001629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.001644 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.104477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.104506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.104514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.104527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.104535 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.139781 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.139783 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:15 crc kubenswrapper[4823]: E1206 06:26:15.139940 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:15 crc kubenswrapper[4823]: E1206 06:26:15.139990 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.139804 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:15 crc kubenswrapper[4823]: E1206 06:26:15.140075 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.207293 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.207359 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.207371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.207396 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.207421 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.309838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.309897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.309908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.309923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.309934 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.412720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.412786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.412798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.412819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.412831 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.516000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.516036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.516046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.516062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.516072 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.618877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.618924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.618934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.618953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.618966 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.722205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.722267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.722281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.722300 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.722314 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.825810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.825862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.825875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.825895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.825910 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.929230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.929270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.929280 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.929295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:15 crc kubenswrapper[4823]: I1206 06:26:15.929305 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:15Z","lastTransitionTime":"2025-12-06T06:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.032854 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.032894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.032904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.032917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.032927 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.135334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.135366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.135376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.135391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.135400 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.139688 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:16 crc kubenswrapper[4823]: E1206 06:26:16.139789 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.238917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.238989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.239004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.239023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.239035 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.341625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.341694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.341704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.341719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.341734 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.444808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.444846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.444855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.444870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.444882 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.547491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.547538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.547548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.547566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.547578 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.650231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.650302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.650313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.650345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.650355 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.752389 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.752431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.752440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.752453 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.752462 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.854708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.854745 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.854754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.854768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.854778 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.957547 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.957604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.957621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.957638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:16 crc kubenswrapper[4823]: I1206 06:26:16.957648 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:16Z","lastTransitionTime":"2025-12-06T06:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.059965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.060014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.060024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.060040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.060051 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.139770 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.139831 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.139871 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:17 crc kubenswrapper[4823]: E1206 06:26:17.139947 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:17 crc kubenswrapper[4823]: E1206 06:26:17.139888 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:17 crc kubenswrapper[4823]: E1206 06:26:17.140069 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.162874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.162953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.162968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.162989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.163002 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.265593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.265623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.265633 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.265645 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.265654 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.368783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.368839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.368852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.368869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.368881 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.472044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.472115 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.472130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.472158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.472178 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.574564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.574635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.574680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.574706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.574720 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.677475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.677525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.677535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.677550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.677610 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.780975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.781025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.781034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.781056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.781070 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.884563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.884631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.884647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.884687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.884702 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.987032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.987125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.987136 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.987151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:17 crc kubenswrapper[4823]: I1206 06:26:17.987163 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:17Z","lastTransitionTime":"2025-12-06T06:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.089777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.089817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.089828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.089843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.089852 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.140138 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:18 crc kubenswrapper[4823]: E1206 06:26:18.140269 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.192320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.192361 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.192372 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.192417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.192449 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.294407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.294447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.294459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.294479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.294492 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.396694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.396727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.396736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.396749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.396759 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.498556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.498593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.498601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.498616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.498627 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.601171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.601210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.601219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.601235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.601245 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.703938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.703985 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.704000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.704017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.704027 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.806158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.806205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.806219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.806237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.806248 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.908291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.908332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.908344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.908369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:18 crc kubenswrapper[4823]: I1206 06:26:18.908380 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:18Z","lastTransitionTime":"2025-12-06T06:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.010877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.010922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.010932 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.010949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.010959 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.112703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.112744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.112753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.112766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.112775 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.140894 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.140925 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:19 crc kubenswrapper[4823]: E1206 06:26:19.141007 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.140895 4823 util.go:30] "No sandbox for pod can be found. 
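
The block above repeats on every node-status sync because the kubelet's network-readiness check keeps failing for the same reason: nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/. A minimal Go sketch of that directory check follows; it is illustrative only, not the kubelet's actual implementation, with the path taken verbatim from the log message.

    // cnicheck.go: list CNI config candidates the way a readiness probe
    // would see them. Hypothetical helper, not kubelet code.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message above
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	found := 0
    	for _, e := range entries {
    		// CNI accepts .conf, .conflist and .json config files.
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Println("CNI config candidate:", filepath.Join(dir, e.Name()))
    			found++
    		}
    	}
    	if found == 0 {
    		fmt.Println("no CNI configuration file found; NetworkReady stays false")
    	}
    }

An empty listing matches the NetworkPluginNotReady loop above: the node stays NotReady until the network provider writes its config into that directory.
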
Dec 06 06:26:19 crc kubenswrapper[4823]: E1206 06:26:19.141105 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:19 crc kubenswrapper[4823]: E1206 06:26:19.141165 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.142182 4823 scope.go:117] "RemoveContainer" containerID="b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea"
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.153105 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de0571-0f46-4710-be19-18dd63ddd1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dddf8f5ca6bb9db03f8bca5c6dcdc673b2038b8e45de295442962742b37ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z"
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.169124 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z"
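
Every "Failed to update status for pod" entry from here on fails identically: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-06, consistent with a CRC VM started long after its certificates were issued. A short Go sketch that dials the endpoint and prints the certificate's validity window; this is assumed diagnostic tooling, not part of any component in the log.

    // certcheck.go: show the validity window of the webhook's serving cert.
    // Hypothetical diagnostic, not OpenShift code.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Address taken from the failing Post URL in the log.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // inspect only; verification is exactly what fails
    	})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()
    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:   %s\n", cert.Subject)
    	fmt.Printf("notBefore: %s\n", cert.NotBefore)
    	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
    	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
    }

Run on the node, this should print a notAfter of 2025-08-24T17:21:41Z, matching the x509 error text repeated in each patch failure below.
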
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.194162 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.206605 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.215718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.215778 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.215789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.215804 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.215814 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.216422 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z"
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.226806 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z"
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.237918 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z"
Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.251987 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.280403 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.292654 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.301856 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.314818 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.319144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.319185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.319196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.319212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.319225 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.328825 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.347025 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.357121 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.369491 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"2025-12-06T06:25:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376\\\\n2025-12-06T06:25:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376 to /host/opt/cni/bin/\\\\n2025-12-06T06:25:25Z [verbose] multus-daemon started\\\\n2025-12-06T06:25:25Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:26:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:19Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.421051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.421089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.421103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.421120 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.421130 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.524041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.524085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.524097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.524116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.524126 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.626328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.626412 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.626425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.626441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.626452 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.728983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.729042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.729059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.729077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.729087 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.833102 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.833138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.833147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.833161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.833170 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.935435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.935468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.935477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.935491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:19 crc kubenswrapper[4823]: I1206 06:26:19.935500 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:19Z","lastTransitionTime":"2025-12-06T06:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.038042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.038080 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.038093 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.038108 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.038118 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.139719 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:20 crc kubenswrapper[4823]: E1206 06:26:20.139838 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.140008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.140031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.140041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.140051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.140059 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.242824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.242866 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.242877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.242892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.242902 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.346735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.346787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.346799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.346817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.346828 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.449604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.449653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.449686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.449709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.449725 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.552725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.552803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.552816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.552839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.552853 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.589086 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/2.log" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.598364 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.598949 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.612629 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.625001 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.635386 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.647653 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.655762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.655788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.655796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.655809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.655818 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.659248 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.676627 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:26:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.691735 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.712527 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.725073 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"2025-12-06T06:25:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376\\\\n2025-12-06T06:25:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376 to /host/opt/cni/bin/\\\\n2025-12-06T06:25:25Z [verbose] multus-daemon started\\\\n2025-12-06T06:25:25Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:26:10Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.735859 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.747491 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.758505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.758562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.758572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.758587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.758597 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.759514 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.773902 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.785958 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.797243 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.808209 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.818256 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de0571-0f46-4710-be19-18dd63ddd1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dddf8f5ca6bb9db03f8bca5c6dcdc673b2038b8e45de295442962742b37ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.830095 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:20Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.860851 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.860904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.860914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.860931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.860957 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.963438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.963478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.963490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.963505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:20 crc kubenswrapper[4823]: I1206 06:26:20.963516 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:20Z","lastTransitionTime":"2025-12-06T06:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.065386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.065424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.065434 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.065447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.065461 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.140414 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.140450 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.140431 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.140611 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.140550 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.140733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.168018 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.168051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.168060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.168072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.168080 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.270393 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.270425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.270436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.270452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.270465 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.373155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.373214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.373226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.373248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.373259 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.476184 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.476237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.476248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.476264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.476274 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.578894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.578935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.578945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.578959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.578969 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.602416 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/3.log" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.602933 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/2.log" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.605311 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726" exitCode=1 Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.605351 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.605388 4823 scope.go:117] "RemoveContainer" containerID="b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.605927 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726" Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.606085 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.618387 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.629428 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.642929 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.654285 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.666078 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.679237 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 
06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.682111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.682172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.682186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.682203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.682215 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.686564 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.686747 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:25.686716146 +0000 UTC m=+146.972468126 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.686845 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.686889 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.686965 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.687008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687049 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687098 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:27:25.687089087 +0000 UTC m=+146.972841047 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687120 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687136 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687146 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687181 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 06:27:25.687169389 +0000 UTC m=+146.972921349 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687345 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687409 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687425 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687344 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687504 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 06:27:25.687492289 +0000 UTC m=+146.973244249 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 06:26:21 crc kubenswrapper[4823]: E1206 06:26:21.687524 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 06:27:25.68751747 +0000 UTC m=+146.973269430 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.694887 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.707230 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de0571-0f46-4710-be19-18dd63ddd1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dddf8f5ca6bb9db03f8bca5c6dcdc673b2038b8e45de295442962742b37ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.721930 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 
06:26:21.736252 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.749022 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.759246 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.770914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.784018 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.785613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.785642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc 
kubenswrapper[4823]: I1206 06:26:21.785652 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.785681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.785697 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.801835 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40de68b30aaf6ce3782c5d327a806ff7e1645ac5
33fc11832388b550a9fd3726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53bdf33d43a42fb1812b2e8970cf652a5058714f807b995377a85e222a77bea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:25:51Z\\\",\\\"message\\\":\\\"nt-go/informers/factory.go:160\\\\nI1206 06:25:51.303346 6479 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.303571 6479 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:25:51.303837 6479 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:25:51.304402 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 06:25:51.304494 6479 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:25:51.304533 6479 factory.go:656] Stopping watch factory\\\\nI1206 06:25:51.304533 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 06:25:51.304552 6479 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:25:51.322908 6479 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1206 06:25:51.322958 6479 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1206 06:25:51.323039 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1206 06:25:51.323069 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1206 06:25:51.323168 6479 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:20Z\\\",\\\"message\\\":\\\"1206 06:26:20.474614 6852 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:26:20.474642 6852 factory.go:656] Stopping watch factory\\\\nI1206 06:26:20.474920 6852 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:26:20.475020 6852 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:26:20.475039 6852 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:26:20.475214 6852 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:26:20.475535 6852 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:26:20.476042 6852 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:26:20.476449 6852 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:26:20.476846 6852 
reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:26:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.813170 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.822817 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.834197 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"2025-12-06T06:25:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376\\\\n2025-12-06T06:25:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376 to /host/opt/cni/bin/\\\\n2025-12-06T06:25:25Z [verbose] multus-daemon started\\\\n2025-12-06T06:25:25Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:26:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:21Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.888297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.888363 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.888376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.888394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.888405 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.990481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.990523 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.990531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.990547 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:21 crc kubenswrapper[4823]: I1206 06:26:21.990557 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:21Z","lastTransitionTime":"2025-12-06T06:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.092302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.092339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.092348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.092362 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.092372 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.140350 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:22 crc kubenswrapper[4823]: E1206 06:26:22.140761 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.194463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.194501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.194509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.194530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.194541 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.297812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.297860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.297875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.297892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.297903 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.400174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.400215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.400239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.400256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.400267 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.502277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.502317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.502328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.502343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.502355 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.604410 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.604461 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.604475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.604495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.604507 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.609550 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/3.log" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.612942 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726" Dec 06 06:26:22 crc kubenswrapper[4823]: E1206 06:26:22.613181 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.626905 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mv8th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42caff9-cbd1-4b1f-91ca-51651adc4a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caeb757c2b86e3259e55a1f7d2ee6a2f67bdf22f5053922faa1ebfea41bdda8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mv8th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.639008 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bldh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2faf943-388e-4105-a30d-b0bbb041f8e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:10Z\\\",\\\"message\\\":\\\"2025-12-06T06:25:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376\\\\n2025-12-06T06:25:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e49c4b27-3b0d-453d-8711-4928a29d0376 to /host/opt/cni/bin/\\\\n2025-12-06T06:25:25Z [verbose] multus-daemon started\\\\n2025-12-06T06:25:25Z [verbose] Readiness Indicator file check\\\\n2025-12-06T06:26:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w696\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bldh8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.651227 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4h4hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"026a8135-2818-40fa-b269-4ea047404758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340dab513a5ea62c07edaa850af6ec663d95d5670aa166104aa43798e7f86671\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl5g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4h4hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.667597 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d0518f-7105-49e1-b537-f4de7b8f9a14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0559376ec28deb68fff383aba017461ea1393c5c093af5c89171e4142e73d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbscc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7wlj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.681301 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4a571bc-1fba-4a48-b611-5c8d7f46d357\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee98d835469a8e1f219eb885362ddaf26d720cf7abd1d5643d860136e63b9d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced9dcca911a59b1eb186462769dbf2016484f04083cc5e1139ee8ddbe472c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlgkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xbg5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.692951 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de0571-0f46-4710-be19-18dd63ddd1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dddf8f5ca6bb9db03f8bca5c6dcdc673b2038b8e45de295442962742b37ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2dfda516e8235398208f69d2b7956f835261cc7f3211a81d9bda3d4e46a827c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.708267 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08a8d6f7-1e5f-4fdd-a613-736390c1593f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1206 06:25:11.977606 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 06:25:11.978741 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3021390485/tls.crt::/tmp/serving-cert-3021390485/tls.key\\\\\\\"\\\\nI1206 06:25:17.729749 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 06:25:17.734303 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 06:25:17.734326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 06:25:17.734370 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 06:25:17.734377 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 06:25:17.739894 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1206 06:25:17.739922 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739926 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 06:25:17.739930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 06:25:17.739933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 06:25:17.739935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 06:25:17.739938 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1206 06:25:17.740150 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1206 06:25:17.741803 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.710100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.710135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.710147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.710165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.710180 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.722808 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c3e43f2-f912-4596-a7fa-e061dad8ad28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96825dd91cbf6e77075365211e7d310ec7f14d6e4045eff0195c70f2f6447185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2dba2c0e710e8afd67e78b787d4caf972c1dbd9c20c7d4a263a6c104c7e07b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a277c59c5d4f466d6f64fd8243e1c2bdd0b10dafb7041876c073a8671bdcd4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9f5f78905b2006625ba4f2b358eb6b341f8c89f7a3de175316f4609b35e86e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.737846 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.752949 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c1bada051698ab40e822a6a3f5a11044dce74b01acf025809c450341a432ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a83f05f3433510a8ec7dd5c25c1269769f20318c0ea911bc8ba2fc6b6c8bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.771441 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de417bab319eefdb19fdb1206dc9a9f7e6342037972f02334c0bcda916bacef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.821587 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e66a9438e4d5bd8a49db8a1e27fecfdd5cc059e0abe08cdb9186149c77807f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.825476 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.825528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.825545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.825568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.825585 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.837546 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.851104 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.863533 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57k6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2bb8a5-743e-42ed-9f30-850690a30e47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zz4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57k6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.881747 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ab5550b-cf92-493f-9f47-fb90c2156346\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:24:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efb70151336c85359d59dd83510985c18a9b83b825b092a4a254f849c8532ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://859f228256697c8a05b042c2f79d6274d9a34365840c730488f5bd6f518f3bad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cdfbdcac1614d41694bc0f4c1d279bbd6f6a7a7d5841452f2c6b3641da48c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:24:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.899046 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-95qxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22972156-629d-4bc6-8108-9f50b7416afc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56968214a054fa7fc3abca868820f1cd7dcc4f3a7cb1150d5e2940588eb2ba3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316
f1ed3ca67966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e907660189498703147998de9a65eef56937a5e8278e24a316f1ed3ca67966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5376da633e7ad6c6102a2fada03662d1a7b86367b80ee4a9d5525ffc313e5be6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f816ce60b841b8c838807d18481c2584470cbdff65f4ba064cdccc0b945a8e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad9f3698e012d9264d7860e9bd34ba8737ced0d8bd48472048b9a94324fb4ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09361d6aa9f30156fc8b021d79d0c22bbf395f0963d3f10ac9bb0f5803797418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://e1db48b12938d48ce73bd1b72ba25a5e6a5640ad542bd8b9bb823c3bb1fca285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhjzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-95qxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.921821 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7a8c395-bca0-48a5-bb35-10e956e85a2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40de68b30aaf6ce3782c5d327a806ff7e1645ac5
33fc11832388b550a9fd3726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T06:26:20Z\\\",\\\"message\\\":\\\"1206 06:26:20.474614 6852 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 06:26:20.474642 6852 factory.go:656] Stopping watch factory\\\\nI1206 06:26:20.474920 6852 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:26:20.475020 6852 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 06:26:20.475039 6852 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 06:26:20.475214 6852 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:26:20.475535 6852 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:26:20.476042 6852 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 06:26:20.476449 6852 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 06:26:20.476846 6852 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T06:26:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T06:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T06:25:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T06:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnbgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T06:25:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rr4m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:22Z is after 2025-08-24T17:21:41Z" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.927531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.927584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.927597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.927616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:22 crc kubenswrapper[4823]: I1206 06:26:22.927629 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:22Z","lastTransitionTime":"2025-12-06T06:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.029983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.030034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.030046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.030060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.030071 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.133304 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.133369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.133420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.133457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.133479 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.140972 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.140972 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.140985 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.141253 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.141339 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.141433 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.236435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.236483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.236492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.236506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.236515 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.338404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.338450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.338464 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.338479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.338490 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.387048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.387124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.387138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.387159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.387172 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.402523 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:23Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.407021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.407058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.407070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.407085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.407095 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.420874 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:23Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.426215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.426269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.426283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.426305 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.426318 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.440877 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:23Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.446145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.446183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.446226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.446246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.446258 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.460329 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:23Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.464382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.464423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.464436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.464455 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.464464 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.475323 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T06:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"120eea9f-209d-4622-89eb-9d0194df90a2\\\",\\\"systemUUID\\\":\\\"41501b97-4373-424f-8e6e-d4f001bb3d11\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T06:26:23Z is after 
2025-08-24T17:21:41Z" Dec 06 06:26:23 crc kubenswrapper[4823]: E1206 06:26:23.475488 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.477600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.477730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.477748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.477769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.477783 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.580950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.581011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.581030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.581086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.581105 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.684267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.684364 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.684378 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.684397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.684408 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.787769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.787812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.787823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.787841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.787853 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.891123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.891168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.891180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.891205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.891217 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.995149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.995231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.995249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.995271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:23 crc kubenswrapper[4823]: I1206 06:26:23.995287 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:23Z","lastTransitionTime":"2025-12-06T06:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.098172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.098212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.098223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.098240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.098250 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.140827 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:24 crc kubenswrapper[4823]: E1206 06:26:24.141032 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.201368 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.201413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.201423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.201443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.201453 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.304844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.304894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.304906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.304926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.304940 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.407975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.408033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.408051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.408079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.408100 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.510884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.510945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.510963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.510987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.511004 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.614342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.614386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.614398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.614414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.614425 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.720565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.720591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.720738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.720756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.720767 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.824105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.824163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.824182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.824201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.824215 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.927638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.927711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.927723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.927744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:24 crc kubenswrapper[4823]: I1206 06:26:24.927757 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:24Z","lastTransitionTime":"2025-12-06T06:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.030250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.030293 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.030302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.030319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.030330 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.133323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.133348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.133356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.133369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.133377 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.185022 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.185114 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:25 crc kubenswrapper[4823]: E1206 06:26:25.185157 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:25 crc kubenswrapper[4823]: E1206 06:26:25.185326 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.185580 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:25 crc kubenswrapper[4823]: E1206 06:26:25.185725 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.235979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.236029 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.236041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.236060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.236073 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.338786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.338826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.338835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.338851 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.338860 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.441542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.441593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.441606 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.441627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.441641 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.544615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.544654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.544684 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.544698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.544709 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.647704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.647861 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.647892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.647922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.647961 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.750540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.750575 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.750583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.750597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.750608 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.853155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.853194 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.853204 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.853217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.853226 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.955733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.955771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.955786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.955802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:25 crc kubenswrapper[4823]: I1206 06:26:25.955813 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:25Z","lastTransitionTime":"2025-12-06T06:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.058065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.058104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.058118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.058135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.058147 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.140183 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:26 crc kubenswrapper[4823]: E1206 06:26:26.140819 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.160887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.160945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.160955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.160976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.160986 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.263763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.263810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.263824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.263843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.263856 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.366580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.366628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.366640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.366656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.366683 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.469582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.469612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.469622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.469639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.469656 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.572243 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.572275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.572284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.572297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.572309 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.675000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.675051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.675063 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.675082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.675095 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.777300 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.777401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.777411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.777492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.777505 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.880436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.880469 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.880477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.880490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.880499 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.983310 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.983342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.983352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.983367 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:26 crc kubenswrapper[4823]: I1206 06:26:26.983378 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:26Z","lastTransitionTime":"2025-12-06T06:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.086271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.086308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.086317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.086332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.086341 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.140427 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.140481 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.140535 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:27 crc kubenswrapper[4823]: E1206 06:26:27.140568 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:27 crc kubenswrapper[4823]: E1206 06:26:27.140693 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:27 crc kubenswrapper[4823]: E1206 06:26:27.140811 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.188635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.188695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.188707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.188724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.188736 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.291487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.291535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.291548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.291564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.291575 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.394294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.394345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.394357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.394376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.394387 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.497081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.497123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.497158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.497192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.497204 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.599887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.599926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.599939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.599955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.599965 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.702354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.702405 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.702414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.702428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.702438 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.806706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.806748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.806777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.806804 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.806821 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.909394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.909429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.909439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.909454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:27 crc kubenswrapper[4823]: I1206 06:26:27.909467 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:27Z","lastTransitionTime":"2025-12-06T06:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.012298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.012340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.012350 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.012366 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.012378 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.114631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.114688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.114730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.114748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.114761 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.140318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:28 crc kubenswrapper[4823]: E1206 06:26:28.140489 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.217386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.217432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.217443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.217459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.217470 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.320515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.320582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.320596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.320619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.320636 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.423150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.424081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.424097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.424130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.424145 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.528077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.528126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.528141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.528167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.528180 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.632639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.632704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.632718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.632736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.632748 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.836044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.836091 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.836104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.836120 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.836132 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.939002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.939050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.939060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.939079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:28 crc kubenswrapper[4823]: I1206 06:26:28.939088 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:28Z","lastTransitionTime":"2025-12-06T06:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.041034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.041063 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.041072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.041084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.041093 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.139867 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.139916 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.140047 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:29 crc kubenswrapper[4823]: E1206 06:26:29.140869 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:29 crc kubenswrapper[4823]: E1206 06:26:29.140908 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:29 crc kubenswrapper[4823]: E1206 06:26:29.141081 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.142820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.142848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.142857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.142870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.142879 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.162596 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.162577709 podStartE2EDuration="1m9.162577709s" podCreationTimestamp="2025-12-06 06:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.162162597 +0000 UTC m=+90.447914577" watchObservedRunningTime="2025-12-06 06:26:29.162577709 +0000 UTC m=+90.448329669" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.199969 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-95qxf" podStartSLOduration=66.199932246 podStartE2EDuration="1m6.199932246s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.179361927 +0000 UTC m=+90.465113887" watchObservedRunningTime="2025-12-06 06:26:29.199932246 +0000 UTC m=+90.485684206" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.223007 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mv8th" podStartSLOduration=67.222987866 podStartE2EDuration="1m7.222987866s" podCreationTimestamp="2025-12-06 06:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.210582285 +0000 UTC m=+90.496334245" watchObservedRunningTime="2025-12-06 06:26:29.222987866 +0000 UTC m=+90.508739836" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.223120 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bldh8" podStartSLOduration=66.22311398 podStartE2EDuration="1m6.22311398s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.222890644 +0000 UTC m=+90.508642604" watchObservedRunningTime="2025-12-06 06:26:29.22311398 +0000 UTC m=+90.508865960" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.238442 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xbg5l" podStartSLOduration=66.238423855 podStartE2EDuration="1m6.238423855s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.238078495 +0000 UTC m=+90.523830475" watchObservedRunningTime="2025-12-06 06:26:29.238423855 +0000 UTC m=+90.524175815" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.244306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.244337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.244346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.244361 4823 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.244371 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.256350 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.256330266 podStartE2EDuration="16.256330266s" podCreationTimestamp="2025-12-06 06:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.256243834 +0000 UTC m=+90.541995794" watchObservedRunningTime="2025-12-06 06:26:29.256330266 +0000 UTC m=+90.542082226" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.277832 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.277811911 podStartE2EDuration="1m11.277811911s" podCreationTimestamp="2025-12-06 06:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.273730023 +0000 UTC m=+90.559482003" watchObservedRunningTime="2025-12-06 06:26:29.277811911 +0000 UTC m=+90.563563871" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.300366 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.300352617 podStartE2EDuration="41.300352617s" podCreationTimestamp="2025-12-06 06:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.299543164 +0000 UTC m=+90.585295124" watchObservedRunningTime="2025-12-06 06:26:29.300352617 +0000 UTC m=+90.586104577" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.346311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.346343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.346351 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.346364 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.346374 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.358218 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4h4hh" podStartSLOduration=67.35819891 podStartE2EDuration="1m7.35819891s" podCreationTimestamp="2025-12-06 06:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.35785661 +0000 UTC m=+90.643608570" watchObservedRunningTime="2025-12-06 06:26:29.35819891 +0000 UTC m=+90.643950870" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.370843 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podStartSLOduration=66.370826937 podStartE2EDuration="1m6.370826937s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:29.370029284 +0000 UTC m=+90.655781244" watchObservedRunningTime="2025-12-06 06:26:29.370826937 +0000 UTC m=+90.656578897" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.448270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.448315 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.448323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.448338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.448348 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.550328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.550386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.550401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.550425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.550439 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.653307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.653356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.653365 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.653382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.653395 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.756748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.756798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.756807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.756822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.756832 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.859273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.859316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.859330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.859346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.859356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.961942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.961998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.962009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.962025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:29 crc kubenswrapper[4823]: I1206 06:26:29.962035 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:29Z","lastTransitionTime":"2025-12-06T06:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.063867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.063971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.063987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.064012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.064028 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.140155 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:30 crc kubenswrapper[4823]: E1206 06:26:30.140286 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.167055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.167098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.167110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.167128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.167137 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.270281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.270334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.270342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.270358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.270368 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.374397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.374470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.374493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.374525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.374543 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.477595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.477656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.477715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.477744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.477764 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.580994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.581059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.581072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.581095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.581136 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.684132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.684169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.684179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.684203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.684221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.786907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.786946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.786955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.786974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.786984 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.890515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.890578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.890593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.890617 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.890634 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.993254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.993307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.993318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.993339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:30 crc kubenswrapper[4823]: I1206 06:26:30.993356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:30Z","lastTransitionTime":"2025-12-06T06:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.095551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.095588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.095597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.095615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.095624 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.140027 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:31 crc kubenswrapper[4823]: E1206 06:26:31.140143 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.140318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:31 crc kubenswrapper[4823]: E1206 06:26:31.140376 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.140912 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:31 crc kubenswrapper[4823]: E1206 06:26:31.141123 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.198061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.198104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.198114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.198130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.198148 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.301563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.301621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.301637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.301685 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.301700 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.404097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.404132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.404139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.404156 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.404165 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.506734 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.506796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.506809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.506831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.506845 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.609798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.609862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.609875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.609928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.609942 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.714203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.714258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.714272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.714297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.714313 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.818020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.818076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.818089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.818106 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.818117 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.920584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.920629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.920637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.920650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:31 crc kubenswrapper[4823]: I1206 06:26:31.920683 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:31Z","lastTransitionTime":"2025-12-06T06:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.022564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.022603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.022638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.022653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.022678 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.124731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.124766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.124781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.124796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.124806 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.139643 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:32 crc kubenswrapper[4823]: E1206 06:26:32.139793 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.227149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.227202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.227213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.227229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.227238 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.329728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.329772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.329783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.329797 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.329807 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.432460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.432495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.432505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.432521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.432533 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.535046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.535079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.535088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.535100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.535109 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.636794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.636830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.636848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.636868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.636878 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.740026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.740074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.740086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.740107 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.740121 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.843682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.843733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.843748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.843767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.843780 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.946453 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.946527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.946542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.946564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:32 crc kubenswrapper[4823]: I1206 06:26:32.946576 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:32Z","lastTransitionTime":"2025-12-06T06:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.049041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.049092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.049109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.049127 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.049139 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.140384 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.140464 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:33 crc kubenswrapper[4823]: E1206 06:26:33.140506 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.140534 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:33 crc kubenswrapper[4823]: E1206 06:26:33.140887 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:26:33 crc kubenswrapper[4823]: E1206 06:26:33.140967 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.151163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.151195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.151204 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.151215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.151224 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.253841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.253879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.253896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.253912 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.253923 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.357840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.357904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.357914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.357934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.357947 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.460939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.460997 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.461009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.461034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.461049 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.564369 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.564409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.564419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.564435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.564447 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.666297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.666346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.666371 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.666384 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.666394 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.769218 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.769294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.769307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.769324 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.769336 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.852730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.852779 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.852790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.852806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.852816 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T06:26:33Z","lastTransitionTime":"2025-12-06T06:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.892502 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"]
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.892908 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.895231 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.896171 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.896283 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 06 06:26:33 crc kubenswrapper[4823]: I1206 06:26:33.896869 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.001308 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/149db7b1-9f3b-4c6a-be64-91b023074e40-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.001567 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/149db7b1-9f3b-4c6a-be64-91b023074e40-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.001600 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149db7b1-9f3b-4c6a-be64-91b023074e40-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.001631 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/149db7b1-9f3b-4c6a-be64-91b023074e40-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.001684 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149db7b1-9f3b-4c6a-be64-91b023074e40-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149db7b1-9f3b-4c6a-be64-91b023074e40-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102581 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/149db7b1-9f3b-4c6a-be64-91b023074e40-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/149db7b1-9f3b-4c6a-be64-91b023074e40-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102687 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149db7b1-9f3b-4c6a-be64-91b023074e40-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102723 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/149db7b1-9f3b-4c6a-be64-91b023074e40-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102809 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/149db7b1-9f3b-4c6a-be64-91b023074e40-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.102948 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/149db7b1-9f3b-4c6a-be64-91b023074e40-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.103794 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149db7b1-9f3b-4c6a-be64-91b023074e40-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.115758 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149db7b1-9f3b-4c6a-be64-91b023074e40-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.119280 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/149db7b1-9f3b-4c6a-be64-91b023074e40-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cvft7\" (UID: \"149db7b1-9f3b-4c6a-be64-91b023074e40\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.140574 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:34 crc kubenswrapper[4823]: E1206 06:26:34.141027 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.209283 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7"
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.652193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7" event={"ID":"149db7b1-9f3b-4c6a-be64-91b023074e40","Type":"ContainerStarted","Data":"4de639882642d93da4784be8182a216bda9e2f6fb668cc6b0687a8408b96f263"}
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.652240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7" event={"ID":"149db7b1-9f3b-4c6a-be64-91b023074e40","Type":"ContainerStarted","Data":"a8d8a73dbbecc2f657a4124d765b98e5036d92e205982b1a7d5687fdeb05e0dc"}
Dec 06 06:26:34 crc kubenswrapper[4823]: I1206 06:26:34.666609 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvft7" podStartSLOduration=71.666593605 podStartE2EDuration="1m11.666593605s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:34.66574133 +0000 UTC m=+95.951493290" watchObservedRunningTime="2025-12-06 06:26:34.666593605 +0000 UTC m=+95.952345555"
Dec 06 06:26:35 crc kubenswrapper[4823]: I1206 06:26:35.140093 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:35 crc kubenswrapper[4823]: I1206 06:26:35.140111 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:35 crc kubenswrapper[4823]: I1206 06:26:35.140108 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:35 crc kubenswrapper[4823]: E1206 06:26:35.140684 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:26:35 crc kubenswrapper[4823]: E1206 06:26:35.140773 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:35 crc kubenswrapper[4823]: E1206 06:26:35.141053 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:35 crc kubenswrapper[4823]: I1206 06:26:35.161442 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Dec 06 06:26:36 crc kubenswrapper[4823]: I1206 06:26:36.140218 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:36 crc kubenswrapper[4823]: E1206 06:26:36.140364 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:37 crc kubenswrapper[4823]: I1206 06:26:37.139884 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:37 crc kubenswrapper[4823]: E1206 06:26:37.140258 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:26:37 crc kubenswrapper[4823]: I1206 06:26:37.140110 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:37 crc kubenswrapper[4823]: I1206 06:26:37.140000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:37 crc kubenswrapper[4823]: E1206 06:26:37.140895 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:37 crc kubenswrapper[4823]: E1206 06:26:37.140978 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:38 crc kubenswrapper[4823]: I1206 06:26:38.140613 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:38 crc kubenswrapper[4823]: E1206 06:26:38.140777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:38 crc kubenswrapper[4823]: I1206 06:26:38.141452 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726"
Dec 06 06:26:38 crc kubenswrapper[4823]: E1206 06:26:38.141603 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a"
Dec 06 06:26:39 crc kubenswrapper[4823]: I1206 06:26:39.140489 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:39 crc kubenswrapper[4823]: I1206 06:26:39.140503 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:39 crc kubenswrapper[4823]: E1206 06:26:39.142367 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:26:39 crc kubenswrapper[4823]: I1206 06:26:39.142403 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:39 crc kubenswrapper[4823]: E1206 06:26:39.142494 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:26:39 crc kubenswrapper[4823]: E1206 06:26:39.142723 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:26:39 crc kubenswrapper[4823]: I1206 06:26:39.171279 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.171253988 podStartE2EDuration="4.171253988s" podCreationTimestamp="2025-12-06 06:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:26:39.170916338 +0000 UTC m=+100.456668318" watchObservedRunningTime="2025-12-06 06:26:39.171253988 +0000 UTC m=+100.457005988"
Dec 06 06:26:40 crc kubenswrapper[4823]: I1206 06:26:40.140087 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:26:40 crc kubenswrapper[4823]: E1206 06:26:40.140405 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:26:41 crc kubenswrapper[4823]: I1206 06:26:41.140242 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:26:41 crc kubenswrapper[4823]: I1206 06:26:41.140268 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:26:41 crc kubenswrapper[4823]: I1206 06:26:41.140272 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:26:41 crc kubenswrapper[4823]: E1206 06:26:41.140367 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:41 crc kubenswrapper[4823]: E1206 06:26:41.140507 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:41 crc kubenswrapper[4823]: E1206 06:26:41.140592 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:41 crc kubenswrapper[4823]: I1206 06:26:41.684217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:41 crc kubenswrapper[4823]: E1206 06:26:41.684486 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:26:41 crc kubenswrapper[4823]: E1206 06:26:41.684593 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs podName:5a2bb8a5-743e-42ed-9f30-850690a30e47 nodeName:}" failed. No retries permitted until 2025-12-06 06:27:45.684559017 +0000 UTC m=+166.970311017 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs") pod "network-metrics-daemon-57k6t" (UID: "5a2bb8a5-743e-42ed-9f30-850690a30e47") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 06:26:42 crc kubenswrapper[4823]: I1206 06:26:42.139680 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:42 crc kubenswrapper[4823]: E1206 06:26:42.139980 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:43 crc kubenswrapper[4823]: I1206 06:26:43.139851 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:43 crc kubenswrapper[4823]: E1206 06:26:43.139961 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:43 crc kubenswrapper[4823]: I1206 06:26:43.140074 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:43 crc kubenswrapper[4823]: I1206 06:26:43.140146 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:43 crc kubenswrapper[4823]: E1206 06:26:43.140338 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:43 crc kubenswrapper[4823]: E1206 06:26:43.140556 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:44 crc kubenswrapper[4823]: I1206 06:26:44.140193 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:44 crc kubenswrapper[4823]: E1206 06:26:44.140361 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:45 crc kubenswrapper[4823]: I1206 06:26:45.140771 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:45 crc kubenswrapper[4823]: I1206 06:26:45.140877 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:45 crc kubenswrapper[4823]: E1206 06:26:45.140929 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:45 crc kubenswrapper[4823]: I1206 06:26:45.140962 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:45 crc kubenswrapper[4823]: E1206 06:26:45.141046 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:45 crc kubenswrapper[4823]: E1206 06:26:45.141116 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:46 crc kubenswrapper[4823]: I1206 06:26:46.140427 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:46 crc kubenswrapper[4823]: E1206 06:26:46.140564 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:47 crc kubenswrapper[4823]: I1206 06:26:47.141396 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:47 crc kubenswrapper[4823]: I1206 06:26:47.141431 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:47 crc kubenswrapper[4823]: I1206 06:26:47.141481 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:47 crc kubenswrapper[4823]: E1206 06:26:47.141496 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:47 crc kubenswrapper[4823]: E1206 06:26:47.141591 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:47 crc kubenswrapper[4823]: E1206 06:26:47.141688 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:48 crc kubenswrapper[4823]: I1206 06:26:48.139687 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:48 crc kubenswrapper[4823]: E1206 06:26:48.139826 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:49 crc kubenswrapper[4823]: I1206 06:26:49.140163 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:49 crc kubenswrapper[4823]: I1206 06:26:49.140214 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:49 crc kubenswrapper[4823]: I1206 06:26:49.140870 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:49 crc kubenswrapper[4823]: E1206 06:26:49.141485 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:49 crc kubenswrapper[4823]: E1206 06:26:49.141715 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:49 crc kubenswrapper[4823]: E1206 06:26:49.142076 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:49 crc kubenswrapper[4823]: I1206 06:26:49.142388 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726" Dec 06 06:26:49 crc kubenswrapper[4823]: E1206 06:26:49.142534 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:26:50 crc kubenswrapper[4823]: I1206 06:26:50.140249 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:50 crc kubenswrapper[4823]: E1206 06:26:50.140403 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:51 crc kubenswrapper[4823]: I1206 06:26:51.140188 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:51 crc kubenswrapper[4823]: I1206 06:26:51.140237 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:51 crc kubenswrapper[4823]: E1206 06:26:51.140344 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:51 crc kubenswrapper[4823]: E1206 06:26:51.140441 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:51 crc kubenswrapper[4823]: I1206 06:26:51.140485 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:51 crc kubenswrapper[4823]: E1206 06:26:51.140591 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:52 crc kubenswrapper[4823]: I1206 06:26:52.140581 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:52 crc kubenswrapper[4823]: E1206 06:26:52.140686 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:53 crc kubenswrapper[4823]: I1206 06:26:53.139879 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:53 crc kubenswrapper[4823]: E1206 06:26:53.140048 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:53 crc kubenswrapper[4823]: I1206 06:26:53.140170 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:53 crc kubenswrapper[4823]: I1206 06:26:53.140196 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:53 crc kubenswrapper[4823]: E1206 06:26:53.140320 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:53 crc kubenswrapper[4823]: E1206 06:26:53.140444 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:54 crc kubenswrapper[4823]: I1206 06:26:54.140347 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:54 crc kubenswrapper[4823]: E1206 06:26:54.140506 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:55 crc kubenswrapper[4823]: I1206 06:26:55.140627 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:55 crc kubenswrapper[4823]: E1206 06:26:55.140823 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:55 crc kubenswrapper[4823]: I1206 06:26:55.140926 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:55 crc kubenswrapper[4823]: E1206 06:26:55.141174 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:55 crc kubenswrapper[4823]: I1206 06:26:55.141460 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:55 crc kubenswrapper[4823]: E1206 06:26:55.141582 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.140102 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:56 crc kubenswrapper[4823]: E1206 06:26:56.140271 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.723055 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/1.log" Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.723688 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/0.log" Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.723735 4823 generic.go:334] "Generic (PLEG): container finished" podID="e2faf943-388e-4105-a30d-b0bbb041f8e0" containerID="31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2" exitCode=1 Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.723767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerDied","Data":"31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2"} Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.723803 4823 scope.go:117] "RemoveContainer" containerID="df4650f938b2b11892bfcac82e396e83654e314befc8fc6cb94bf74c401730d7" Dec 06 06:26:56 crc kubenswrapper[4823]: I1206 06:26:56.724406 4823 scope.go:117] "RemoveContainer" containerID="31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2" Dec 06 06:26:56 crc kubenswrapper[4823]: E1206 06:26:56.724738 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bldh8_openshift-multus(e2faf943-388e-4105-a30d-b0bbb041f8e0)\"" pod="openshift-multus/multus-bldh8" podUID="e2faf943-388e-4105-a30d-b0bbb041f8e0" Dec 06 06:26:57 crc kubenswrapper[4823]: I1206 06:26:57.139743 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:57 crc kubenswrapper[4823]: E1206 06:26:57.139866 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:57 crc kubenswrapper[4823]: I1206 06:26:57.139748 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:57 crc kubenswrapper[4823]: E1206 06:26:57.139967 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:57 crc kubenswrapper[4823]: I1206 06:26:57.139742 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:57 crc kubenswrapper[4823]: E1206 06:26:57.140036 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:57 crc kubenswrapper[4823]: I1206 06:26:57.729034 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/1.log" Dec 06 06:26:58 crc kubenswrapper[4823]: I1206 06:26:58.139834 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:26:58 crc kubenswrapper[4823]: E1206 06:26:58.139974 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:26:59 crc kubenswrapper[4823]: E1206 06:26:59.089262 4823 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 06 06:26:59 crc kubenswrapper[4823]: I1206 06:26:59.139975 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:26:59 crc kubenswrapper[4823]: I1206 06:26:59.139977 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:26:59 crc kubenswrapper[4823]: I1206 06:26:59.139994 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:26:59 crc kubenswrapper[4823]: E1206 06:26:59.141201 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:26:59 crc kubenswrapper[4823]: E1206 06:26:59.141268 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:26:59 crc kubenswrapper[4823]: E1206 06:26:59.141333 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:26:59 crc kubenswrapper[4823]: E1206 06:26:59.230629 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 06:27:00 crc kubenswrapper[4823]: I1206 06:27:00.140081 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:00 crc kubenswrapper[4823]: E1206 06:27:00.140800 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:27:00 crc kubenswrapper[4823]: I1206 06:27:00.141099 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726" Dec 06 06:27:00 crc kubenswrapper[4823]: E1206 06:27:00.141491 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rr4m5_openshift-ovn-kubernetes(d7a8c395-bca0-48a5-bb35-10e956e85a2a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" Dec 06 06:27:01 crc kubenswrapper[4823]: I1206 06:27:01.140546 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:01 crc kubenswrapper[4823]: I1206 06:27:01.140546 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:01 crc kubenswrapper[4823]: E1206 06:27:01.140709 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:27:01 crc kubenswrapper[4823]: E1206 06:27:01.140799 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:27:01 crc kubenswrapper[4823]: I1206 06:27:01.140798 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:01 crc kubenswrapper[4823]: E1206 06:27:01.140910 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:27:02 crc kubenswrapper[4823]: I1206 06:27:02.139905 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:02 crc kubenswrapper[4823]: E1206 06:27:02.140065 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:27:03 crc kubenswrapper[4823]: I1206 06:27:03.140184 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:03 crc kubenswrapper[4823]: I1206 06:27:03.140248 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:03 crc kubenswrapper[4823]: I1206 06:27:03.140205 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:03 crc kubenswrapper[4823]: E1206 06:27:03.140414 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:27:03 crc kubenswrapper[4823]: E1206 06:27:03.140535 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:27:03 crc kubenswrapper[4823]: E1206 06:27:03.140657 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:27:04 crc kubenswrapper[4823]: I1206 06:27:04.139964 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:04 crc kubenswrapper[4823]: E1206 06:27:04.140102 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:27:04 crc kubenswrapper[4823]: E1206 06:27:04.232262 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 06:27:05 crc kubenswrapper[4823]: I1206 06:27:05.140021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:05 crc kubenswrapper[4823]: I1206 06:27:05.140184 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:05 crc kubenswrapper[4823]: I1206 06:27:05.140333 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:05 crc kubenswrapper[4823]: E1206 06:27:05.140324 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:27:05 crc kubenswrapper[4823]: E1206 06:27:05.140464 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:27:05 crc kubenswrapper[4823]: E1206 06:27:05.140611 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:27:06 crc kubenswrapper[4823]: I1206 06:27:06.139861 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:06 crc kubenswrapper[4823]: E1206 06:27:06.140002 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:27:07 crc kubenswrapper[4823]: I1206 06:27:07.140242 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:07 crc kubenswrapper[4823]: I1206 06:27:07.140243 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:07 crc kubenswrapper[4823]: E1206 06:27:07.140388 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:27:07 crc kubenswrapper[4823]: E1206 06:27:07.140446 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:27:07 crc kubenswrapper[4823]: I1206 06:27:07.140242 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:07 crc kubenswrapper[4823]: E1206 06:27:07.140536 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:27:08 crc kubenswrapper[4823]: I1206 06:27:08.140703 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:08 crc kubenswrapper[4823]: E1206 06:27:08.140850 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:27:09 crc kubenswrapper[4823]: I1206 06:27:09.140459 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:09 crc kubenswrapper[4823]: I1206 06:27:09.140459 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:09 crc kubenswrapper[4823]: E1206 06:27:09.141929 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 06:27:09 crc kubenswrapper[4823]: I1206 06:27:09.142064 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:09 crc kubenswrapper[4823]: E1206 06:27:09.142122 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 06:27:09 crc kubenswrapper[4823]: E1206 06:27:09.142286 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47" Dec 06 06:27:09 crc kubenswrapper[4823]: E1206 06:27:09.232822 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 06:27:10 crc kubenswrapper[4823]: I1206 06:27:10.140576 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:10 crc kubenswrapper[4823]: E1206 06:27:10.140732 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 06:27:11 crc kubenswrapper[4823]: I1206 06:27:11.140007 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:11 crc kubenswrapper[4823]: I1206 06:27:11.140051 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:11 crc kubenswrapper[4823]: I1206 06:27:11.140279 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:27:11 crc kubenswrapper[4823]: I1206 06:27:11.140745 4823 scope.go:117] "RemoveContainer" containerID="31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2"
Dec 06 06:27:11 crc kubenswrapper[4823]: E1206 06:27:11.140771 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:27:11 crc kubenswrapper[4823]: E1206 06:27:11.141071 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:27:11 crc kubenswrapper[4823]: E1206 06:27:11.141174 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:27:11 crc kubenswrapper[4823]: I1206 06:27:11.775823 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/1.log"
Dec 06 06:27:11 crc kubenswrapper[4823]: I1206 06:27:11.775903 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerStarted","Data":"38337c6d04bc6b2fa4ecc741d7ce7660c69d8d9d203cc577034850b4c54a80af"}
Dec 06 06:27:12 crc kubenswrapper[4823]: I1206 06:27:12.140233 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:27:12 crc kubenswrapper[4823]: E1206 06:27:12.140713 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:27:13 crc kubenswrapper[4823]: I1206 06:27:13.140603 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:27:13 crc kubenswrapper[4823]: I1206 06:27:13.140716 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:27:13 crc kubenswrapper[4823]: I1206 06:27:13.140655 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:27:13 crc kubenswrapper[4823]: E1206 06:27:13.140881 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:27:13 crc kubenswrapper[4823]: E1206 06:27:13.140964 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:27:13 crc kubenswrapper[4823]: E1206 06:27:13.141017 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:27:13 crc kubenswrapper[4823]: I1206 06:27:13.141630 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726"
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.140062 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:27:14 crc kubenswrapper[4823]: E1206 06:27:14.140242 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:27:14 crc kubenswrapper[4823]: E1206 06:27:14.234708 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.635693 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-57k6t"]
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.635800 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:27:14 crc kubenswrapper[4823]: E1206 06:27:14.635905 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.787605 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/3.log"
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.789953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerStarted","Data":"9961728a73d27d7249b1c1628309f8bdf627d8fc1d08120ec5b900a351b6ff9c"}
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.790396 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5"
Dec 06 06:27:14 crc kubenswrapper[4823]: I1206 06:27:14.816261 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podStartSLOduration=111.816245241 podStartE2EDuration="1m51.816245241s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:14.815489491 +0000 UTC m=+136.101241451" watchObservedRunningTime="2025-12-06 06:27:14.816245241 +0000 UTC m=+136.101997201"
Dec 06 06:27:15 crc kubenswrapper[4823]: I1206 06:27:15.140000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:27:15 crc kubenswrapper[4823]: I1206 06:27:15.140083 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:27:15 crc kubenswrapper[4823]: E1206 06:27:15.140173 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:27:15 crc kubenswrapper[4823]: E1206 06:27:15.140296 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:27:16 crc kubenswrapper[4823]: I1206 06:27:16.139966 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:27:16 crc kubenswrapper[4823]: I1206 06:27:16.140047 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:27:16 crc kubenswrapper[4823]: E1206 06:27:16.140137 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:27:16 crc kubenswrapper[4823]: E1206 06:27:16.140226 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:27:17 crc kubenswrapper[4823]: I1206 06:27:17.139994 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:27:17 crc kubenswrapper[4823]: I1206 06:27:17.140012 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:27:17 crc kubenswrapper[4823]: E1206 06:27:17.140654 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:27:17 crc kubenswrapper[4823]: E1206 06:27:17.140784 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:27:18 crc kubenswrapper[4823]: I1206 06:27:18.139884 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:27:18 crc kubenswrapper[4823]: I1206 06:27:18.139884 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:27:18 crc kubenswrapper[4823]: E1206 06:27:18.140082 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57k6t" podUID="5a2bb8a5-743e-42ed-9f30-850690a30e47"
Dec 06 06:27:18 crc kubenswrapper[4823]: E1206 06:27:18.140006 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 06:27:19 crc kubenswrapper[4823]: I1206 06:27:19.140578 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:27:19 crc kubenswrapper[4823]: I1206 06:27:19.140578 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:27:19 crc kubenswrapper[4823]: E1206 06:27:19.141798 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 06:27:19 crc kubenswrapper[4823]: E1206 06:27:19.141885 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 06:27:20 crc kubenswrapper[4823]: I1206 06:27:20.140606 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t"
Dec 06 06:27:20 crc kubenswrapper[4823]: I1206 06:27:20.140961 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 06:27:20 crc kubenswrapper[4823]: I1206 06:27:20.144441 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 06 06:27:20 crc kubenswrapper[4823]: I1206 06:27:20.144752 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 06 06:27:20 crc kubenswrapper[4823]: I1206 06:27:20.144441 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 06 06:27:20 crc kubenswrapper[4823]: I1206 06:27:20.151687 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 06 06:27:21 crc kubenswrapper[4823]: I1206 06:27:21.139997 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:27:21 crc kubenswrapper[4823]: I1206 06:27:21.140064 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 06:27:21 crc kubenswrapper[4823]: I1206 06:27:21.142708 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 06 06:27:21 crc kubenswrapper[4823]: I1206 06:27:21.142965 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.754477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.799619 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.800158 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.800505 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5rbww"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.800892 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.801496 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9g789"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.802151 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9g789"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.804706 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.804977 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.804990 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.805143 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.806564 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.807274 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.807426 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.807986 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.812740 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.814069 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.814470 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.815037 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.816539 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.817050 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.817557 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.817791 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.817944 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.818720 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.819789 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.820048 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.820287 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.820518 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.820840 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821086 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821228 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821358 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821406 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821489 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821516 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.821148 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.823441 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-96764"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.824156 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.824533 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j27ls"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.825061 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j27ls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.825087 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcghw"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.825642 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.826329 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wzsch"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.826651 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.828119 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rnpnm"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.828589 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.829009 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.829233 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.829011 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.841279 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.842341 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.854090 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.856334 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fwj5n"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.856815 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.857276 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.857507 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.857820 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.858406 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.858628 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.859696 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.859655 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.861298 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.861554 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.861967 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.862005 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.862418 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.862758 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.863646 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.863945 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864251 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864423 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864433 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864768 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864822 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864839 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.864772 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.865061 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.865071 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.865113 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.865212 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.865291 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.866409 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.866701 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.866849 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.866936 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.866846 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.875726 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.875909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.875953 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-audit-dir\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.875977 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-oauth-serving-cert\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876026 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-client-ca\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876050 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-dir\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876071 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/913bebf0-c7cd-40f4-b429-fe18368c8076-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-trusted-ca\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pw6k\" (UniqueName: \"kubernetes.io/projected/7df56bd6-4c2c-4432-b312-51019bf0f458-kube-api-access-7pw6k\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876147 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-serving-cert\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876170 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-client-ca\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876193 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7lm\" (UniqueName: \"kubernetes.io/projected/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-kube-api-access-bv7lm\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-config\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876235 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876254 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-policies\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876275 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e640ad8-265d-4c39-976e-3e772057d0d0-serving-cert\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876297 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjzvw\" (UniqueName: \"kubernetes.io/projected/913bebf0-c7cd-40f4-b429-fe18368c8076-kube-api-access-tjzvw\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876324 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-serving-cert\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876350 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4351e984-b1eb-4ffc-96b8-5e37536b79da-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bmbbn\" (UID: \"4351e984-b1eb-4ffc-96b8-5e37536b79da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876372 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-serving-cert\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876393 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876416 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876436 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-config\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876457 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-encryption-config\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876478 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913bebf0-c7cd-40f4-b429-fe18368c8076-config\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876501 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-trusted-ca-bundle\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-audit-policies\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876564 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-489d6\" (UniqueName: \"kubernetes.io/projected/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-kube-api-access-489d6\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de1c076-eee2-4e97-9916-3ff159867471-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876627 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876680 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de1c076-eee2-4e97-9916-3ff159867471-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876727 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5n56\" (UniqueName: \"kubernetes.io/projected/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-kube-api-access-l5n56\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876749 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btvjw\" (UniqueName: \"kubernetes.io/projected/ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51-kube-api-access-btvjw\") pod \"downloads-7954f5f757-9g789\" (UID: \"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51\") " pod="openshift-console/downloads-7954f5f757-9g789"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876768 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/913bebf0-c7cd-40f4-b429-fe18368c8076-images\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876788 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-console-config\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876829 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rmlh\" (UniqueName: \"kubernetes.io/projected/4351e984-b1eb-4ffc-96b8-5e37536b79da-kube-api-access-6rmlh\") pod \"cluster-samples-operator-665b6dd947-bmbbn\" (UID: \"4351e984-b1eb-4ffc-96b8-5e37536b79da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876848 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtj8\" (UniqueName: \"kubernetes.io/projected/0e640ad8-265d-4c39-976e-3e772057d0d0-kube-api-access-gwtj8\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876869 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlmt\" (UniqueName: \"kubernetes.io/projected/e802aa0a-cd13-43df-be69-40b0bca7200f-kube-api-access-8wlmt\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876890 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7k9\" (UniqueName: \"kubernetes.io/projected/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-kube-api-access-mb7k9\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4v7n\" (UniqueName: \"kubernetes.io/projected/8de1c076-eee2-4e97-9916-3ff159867471-kube-api-access-g4v7n\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876943 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.876968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-serving-cert\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877021 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-service-ca\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877055 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877076 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhsm\" (UniqueName: \"kubernetes.io/projected/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-kube-api-access-vfhsm\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877140 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877179 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnzqc\" (UniqueName: \"kubernetes.io/projected/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-kube-api-access-fnzqc\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877201 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-etcd-client\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877223 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877248 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df56bd6-4c2c-4432-b312-51019bf0f458-serving-cert\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877270 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7df56bd6-4c2c-4432-b312-51019bf0f458-available-featuregates\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877291 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877312 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-config\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877333 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877354 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-serving-cert\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877378 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-oauth-config\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-config\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.877781 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.878486 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.878706 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.878938 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879177 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879276 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879351 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879387 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879500 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879579 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879648 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.879816 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.881935 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.882623 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.882649 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rq4rk"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.883743 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.882827 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.883445 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.883510 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885019 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885134 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885287 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885436 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885556 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885725 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885771 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885825 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.886010 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.886187 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.886650 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.886811 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.886950 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.887066 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.887214 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.887321 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.887417 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.885730 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.888290 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.890443 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.896139 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.898631 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.899271 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.900440 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.902715 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.902983 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.903143 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.904499 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.904873 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rb79w"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.905353 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s"]
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.905944 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s"
Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.906193 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.906635 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.907489 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.909395 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.909906 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.912333 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjkks"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.912899 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.927203 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.929683 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2d7rr"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.930510 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4rlt6"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.932428 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.932616 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.933294 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.933395 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.935965 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.937475 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.944203 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.951789 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.955436 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.960370 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.961098 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.961443 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.962454 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.962643 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.963375 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.963714 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.964465 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.964941 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2cjj5"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.965611 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.966120 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.966529 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.968887 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.969486 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.970080 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h8lzb"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.970198 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.970872 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.971114 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pr5dh"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.972508 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.975626 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.977348 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978033 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978072 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978559 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-config\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978633 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-audit-dir\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-oauth-serving-cert\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978732 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-client-ca\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978780 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-dir\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978801 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/913bebf0-c7cd-40f4-b429-fe18368c8076-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978865 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-trusted-ca\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pw6k\" (UniqueName: \"kubernetes.io/projected/7df56bd6-4c2c-4432-b312-51019bf0f458-kube-api-access-7pw6k\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978912 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-serving-cert\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-client-ca\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7lm\" (UniqueName: \"kubernetes.io/projected/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-kube-api-access-bv7lm\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978957 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-config\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: 
\"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.978988 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-policies\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e640ad8-265d-4c39-976e-3e772057d0d0-serving-cert\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzvw\" (UniqueName: \"kubernetes.io/projected/913bebf0-c7cd-40f4-b429-fe18368c8076-kube-api-access-tjzvw\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-serving-cert\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4351e984-b1eb-4ffc-96b8-5e37536b79da-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bmbbn\" (UID: \"4351e984-b1eb-4ffc-96b8-5e37536b79da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979077 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-serving-cert\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-profile-collector-cert\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979112 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc 
kubenswrapper[4823]: I1206 06:27:24.979128 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-config\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979175 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-encryption-config\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913bebf0-c7cd-40f4-b429-fe18368c8076-config\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979222 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-trusted-ca-bundle\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979246 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-audit-policies\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-489d6\" (UniqueName: 
\"kubernetes.io/projected/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-kube-api-access-489d6\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de1c076-eee2-4e97-9916-3ff159867471-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979341 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979368 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de1c076-eee2-4e97-9916-3ff159867471-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5n56\" (UniqueName: \"kubernetes.io/projected/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-kube-api-access-l5n56\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-srv-cert\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979419 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979438 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btvjw\" (UniqueName: \"kubernetes.io/projected/ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51-kube-api-access-btvjw\") pod \"downloads-7954f5f757-9g789\" (UID: \"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51\") " pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979464 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/913bebf0-c7cd-40f4-b429-fe18368c8076-images\") pod \"machine-api-operator-5694c8668f-5rbww\" 
(UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979481 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-console-config\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-audit-dir\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rmlh\" (UniqueName: \"kubernetes.io/projected/4351e984-b1eb-4ffc-96b8-5e37536b79da-kube-api-access-6rmlh\") pod \"cluster-samples-operator-665b6dd947-bmbbn\" (UID: \"4351e984-b1eb-4ffc-96b8-5e37536b79da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtj8\" (UniqueName: \"kubernetes.io/projected/0e640ad8-265d-4c39-976e-3e772057d0d0-kube-api-access-gwtj8\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979595 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlmt\" (UniqueName: \"kubernetes.io/projected/e802aa0a-cd13-43df-be69-40b0bca7200f-kube-api-access-8wlmt\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979619 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxbp\" (UniqueName: \"kubernetes.io/projected/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-kube-api-access-ncxbp\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7k9\" (UniqueName: \"kubernetes.io/projected/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-kube-api-access-mb7k9\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 
06:27:24.979685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4v7n\" (UniqueName: \"kubernetes.io/projected/8de1c076-eee2-4e97-9916-3ff159867471-kube-api-access-g4v7n\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-serving-cert\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979766 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-service-ca\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhsm\" (UniqueName: \"kubernetes.io/projected/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-kube-api-access-vfhsm\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979839 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979858 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979895 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzqc\" (UniqueName: \"kubernetes.io/projected/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-kube-api-access-fnzqc\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979916 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-etcd-client\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979954 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df56bd6-4c2c-4432-b312-51019bf0f458-serving-cert\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979972 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7df56bd6-4c2c-4432-b312-51019bf0f458-available-featuregates\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.980004 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-config\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.980019 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.980039 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-serving-cert\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.980054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-oauth-config\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.980486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-config\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.981100 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.982222 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-config\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.982429 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.982995 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.983368 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-oauth-serving-cert\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.984033 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913bebf0-c7cd-40f4-b429-fe18368c8076-config\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.984584 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-trusted-ca-bundle\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.986342 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/913bebf0-c7cd-40f4-b429-fe18368c8076-images\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.986762 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.986950 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de1c076-eee2-4e97-9916-3ff159867471-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.987360 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-dir\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.987731 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.988380 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-policies\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.988651 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.988706 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-console-config\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.988822 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-audit-policies\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.988995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de1c076-eee2-4e97-9916-3ff159867471-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.989264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-oauth-config\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.989296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-client-ca\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.989557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-service-ca\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.979780 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.991094 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5rbww"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.991126 4823 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.991124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-trusted-ca\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.991141 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9g789"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.991225 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.992404 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.992374 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.992559 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.992690 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.994278 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-serving-cert\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.994449 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7df56bd6-4c2c-4432-b312-51019bf0f458-available-featuregates\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.994491 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.994686 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rnpnm"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.994850 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-config\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.995236 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.995521 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e640ad8-265d-4c39-976e-3e772057d0d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.995593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.996281 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.996327 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-96764"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.996603 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.997128 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcghw"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.997440 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-config\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.998341 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"] Dec 06 06:27:24 crc kubenswrapper[4823]: I1206 06:27:24.999571 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wzsch"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.000330 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df56bd6-4c2c-4432-b312-51019bf0f458-serving-cert\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.000836 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.000852 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-serving-cert\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.001139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-serving-cert\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.002915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/913bebf0-c7cd-40f4-b429-fe18368c8076-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.003002 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.003194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-encryption-config\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.003232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4351e984-b1eb-4ffc-96b8-5e37536b79da-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bmbbn\" (UID: \"4351e984-b1eb-4ffc-96b8-5e37536b79da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.003271 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.003296 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.003446 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.004622 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rq4rk"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.005453 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-etcd-client\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.005514 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nbvlv"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.005533 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.005953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e640ad8-265d-4c39-976e-3e772057d0d0-serving-cert\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.006213 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-client-ca\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.006874 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-24zhr"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.009728 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.009868 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.010469 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.010540 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.018593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-serving-cert\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.019277 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjkks"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.019359 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.019374 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rb79w"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.022768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.022957 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.046682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.046757 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.047030 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-serving-cert\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.047138 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j27ls"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.047186 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.053264 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fwj5n"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.056183 4823 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.059457 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.059700 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.061008 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.062621 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.063974 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.065199 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nbvlv"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.066732 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h8lzb"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.068186 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2d7rr"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.069526 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.070682 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.072143 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.072954 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cd5rf"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.073801 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.074464 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pr5dh"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.076251 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-txkg5"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.076881 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.078902 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.080177 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2cjj5"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.081048 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-profile-collector-cert\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.081296 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-srv-cert\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.081465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncxbp\" (UniqueName: \"kubernetes.io/projected/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-kube-api-access-ncxbp\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.082644 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.082834 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-24zhr"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.084607 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.088780 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.089992 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cd5rf"] Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.107644 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.122481 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.142419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.163191 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.182455 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.203099 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.210825 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.223827 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.235052 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-srv-cert\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.242965 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.255239 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-profile-collector-cert\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.262495 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.283088 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.302579 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.324068 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.343176 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.363483 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.403229 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.423039 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.442299 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.463061 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.482581 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.504106 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.522226 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.542906 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.563071 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.582467 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.603531 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.622447 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.647784 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.662707 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.682827 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.692487 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.692736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.692825 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:25 crc kubenswrapper[4823]: E1206 06:27:25.692925 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-06 06:29:27.692895875 +0000 UTC m=+268.978647835 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.693056 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.693096 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.693696 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.696544 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.696901 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.697194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.702368 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.722638 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.763297 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.782899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.802856 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.822843 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.842628 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.862830 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.873336 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.882542 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.902570 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.922232 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.942241 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.952958 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.960827 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.962251 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.980997 4823 request.go:700] Waited for 1.015016689s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Dec 06 06:27:25 crc kubenswrapper[4823]: I1206 06:27:25.983012 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.002914 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.023970 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.048637 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.062453 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: W1206 06:27:26.069846 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-1925b019d2c1404e11f2b7d6d417c7d24e2a37bd265f4a5cad352f9da223af25 WatchSource:0}: Error finding container 1925b019d2c1404e11f2b7d6d417c7d24e2a37bd265f4a5cad352f9da223af25: Status 404 returned error can't find the container with id 1925b019d2c1404e11f2b7d6d417c7d24e2a37bd265f4a5cad352f9da223af25 Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.082521 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.102500 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.128538 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.143926 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.162245 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 06:27:26 crc kubenswrapper[4823]: W1206 06:27:26.178609 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-402f3939dda715910b15af5a807c3ee26aad693ceb29e479897edfaceb63fc23 WatchSource:0}: Error finding container 
402f3939dda715910b15af5a807c3ee26aad693ceb29e479897edfaceb63fc23: Status 404 returned error can't find the container with id 402f3939dda715910b15af5a807c3ee26aad693ceb29e479897edfaceb63fc23 Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.182518 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.202287 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.222650 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.242121 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.261900 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.281704 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.301881 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.322953 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.343388 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.363075 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.382808 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.403171 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.423148 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.442494 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.462200 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.482835 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.502931 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 
06:27:26.540758 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzvw\" (UniqueName: \"kubernetes.io/projected/913bebf0-c7cd-40f4-b429-fe18368c8076-kube-api-access-tjzvw\") pod \"machine-api-operator-5694c8668f-5rbww\" (UID: \"913bebf0-c7cd-40f4-b429-fe18368c8076\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.556033 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5n56\" (UniqueName: \"kubernetes.io/projected/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-kube-api-access-l5n56\") pod \"oauth-openshift-558db77b4-qcghw\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.580605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-489d6\" (UniqueName: \"kubernetes.io/projected/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-kube-api-access-489d6\") pod \"route-controller-manager-6576b87f9c-zchg5\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.597354 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btvjw\" (UniqueName: \"kubernetes.io/projected/ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51-kube-api-access-btvjw\") pod \"downloads-7954f5f757-9g789\" (UID: \"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51\") " pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.620953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pw6k\" (UniqueName: \"kubernetes.io/projected/7df56bd6-4c2c-4432-b312-51019bf0f458-kube-api-access-7pw6k\") pod \"openshift-config-operator-7777fb866f-96764\" (UID: \"7df56bd6-4c2c-4432-b312-51019bf0f458\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.627023 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.647089 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzqc\" (UniqueName: \"kubernetes.io/projected/15cb22b1-04c0-45b6-81fa-9cb976a1aecb-kube-api-access-fnzqc\") pod \"console-operator-58897d9998-j27ls\" (UID: \"15cb22b1-04c0-45b6-81fa-9cb976a1aecb\") " pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.656864 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.657020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.682344 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7lm\" (UniqueName: \"kubernetes.io/projected/b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1-kube-api-access-bv7lm\") pod \"cluster-image-registry-operator-dc59b4c8b-l8slc\" (UID: \"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.705468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlmt\" (UniqueName: \"kubernetes.io/projected/e802aa0a-cd13-43df-be69-40b0bca7200f-kube-api-access-8wlmt\") pod \"console-f9d7485db-wzsch\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.717101 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rmlh\" (UniqueName: \"kubernetes.io/projected/4351e984-b1eb-4ffc-96b8-5e37536b79da-kube-api-access-6rmlh\") pod \"cluster-samples-operator-665b6dd947-bmbbn\" (UID: \"4351e984-b1eb-4ffc-96b8-5e37536b79da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.740227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtj8\" (UniqueName: \"kubernetes.io/projected/0e640ad8-265d-4c39-976e-3e772057d0d0-kube-api-access-gwtj8\") pod \"authentication-operator-69f744f599-xmzfs\" (UID: \"0e640ad8-265d-4c39-976e-3e772057d0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.747867 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.757400 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhsm\" (UniqueName: \"kubernetes.io/projected/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-kube-api-access-vfhsm\") pod \"controller-manager-879f6c89f-rnpnm\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.762461 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.766574 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.775078 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.786121 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.787799 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.803435 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5rbww"] Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.803474 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.823338 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.838254 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9be7c13c0db2170cf2341fca8c3d949caa026654663adbe1391763ee379fdb44"} Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.846042 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7642754bf5e021ac4aaa9f79e7bde2b46caf75fda3b7ee6ff3b6b2be9f2ec1be"} Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.846115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"402f3939dda715910b15af5a807c3ee26aad693ceb29e479897edfaceb63fc23"} Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.849654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"541a7fbc0a8fa9b0b96f9372d9cb17a456f238c414428fb7b6d0bb5d7e292125"} Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.849734 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1925b019d2c1404e11f2b7d6d417c7d24e2a37bd265f4a5cad352f9da223af25"} Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.855544 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.865019 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.869444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7k9\" (UniqueName: \"kubernetes.io/projected/f5f016d4-304f-4e8b-b0d8-9445bd44f6d2-kube-api-access-mb7k9\") pod \"apiserver-7bbb656c7d-cwrr7\" (UID: \"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.874476 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.882480 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4v7n\" (UniqueName: \"kubernetes.io/projected/8de1c076-eee2-4e97-9916-3ff159867471-kube-api-access-g4v7n\") pod \"openshift-apiserver-operator-796bbdcf4f-lq76q\" (UID: \"8de1c076-eee2-4e97-9916-3ff159867471\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.882854 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.913691 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9g789"] Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.925180 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.931593 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.943866 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.944475 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.963917 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.983698 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 06:27:26 crc kubenswrapper[4823]: I1206 06:27:26.985006 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.001562 4823 request.go:700] Waited for 1.927531657s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.003143 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.024877 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.028748 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.031869 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.043647 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.060638 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.063358 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.083391 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.089324 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcghw"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.103082 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.104352 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-96764"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.123424 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 06:27:27 crc kubenswrapper[4823]: W1206 06:27:27.145488 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df56bd6_4c2c_4432_b312_51019bf0f458.slice/crio-ccc801f36e2437abf2cafa83ff354c1412d7524cafeeea55cfa5898c7fc72ae1 WatchSource:0}: Error finding container ccc801f36e2437abf2cafa83ff354c1412d7524cafeeea55cfa5898c7fc72ae1: Status 404 returned error can't find the container with id ccc801f36e2437abf2cafa83ff354c1412d7524cafeeea55cfa5898c7fc72ae1 Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.163265 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncxbp\" (UniqueName: 
\"kubernetes.io/projected/f7602e8b-a241-4125-aa10-3e9c5a2ca5bd-kube-api-access-ncxbp\") pod \"catalog-operator-68c6474976-v74ml\" (UID: \"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.213883 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a0d23be-9267-406c-a67e-6970f6e8b922-audit-dir\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214260 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdglc\" (UniqueName: \"kubernetes.io/projected/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-kube-api-access-jdglc\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-ca\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214339 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872f6308-0267-41cd-bc92-e401f9d7cda9-trusted-ca\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214374 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48ns\" (UniqueName: \"kubernetes.io/projected/3a0d23be-9267-406c-a67e-6970f6e8b922-kube-api-access-d48ns\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-registry-tls\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0483a63-0788-40f1-9d6e-6f3377195729-config\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214483 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b0483a63-0788-40f1-9d6e-6f3377195729-auth-proxy-config\") pod \"machine-approver-56656f9798-qtfn6\" (UID: 
\"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-encryption-config\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214553 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45d8cc05-e612-40f5-a5dd-57d7abbadc51-metrics-tls\") pod \"dns-operator-744455d44c-rq4rk\" (UID: \"45d8cc05-e612-40f5-a5dd-57d7abbadc51\") " pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214585 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd84q\" (UniqueName: \"kubernetes.io/projected/2e10270a-5495-4824-b934-523a90d07dca-kube-api-access-fd84q\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-image-import-ca\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214640 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/872f6308-0267-41cd-bc92-e401f9d7cda9-metrics-tls\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214703 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e10270a-5495-4824-b934-523a90d07dca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214742 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e10270a-5495-4824-b934-523a90d07dca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214794 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a0d23be-9267-406c-a67e-6970f6e8b922-node-pullsecrets\") pod 
\"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f369975-6444-44d7-b85f-290ec604b172-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214965 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-config\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.214987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-service-ca\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215008 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b0483a63-0788-40f1-9d6e-6f3377195729-machine-approver-tls\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215024 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-serving-cert\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215055 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215071 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4p9\" (UniqueName: \"kubernetes.io/projected/45d8cc05-e612-40f5-a5dd-57d7abbadc51-kube-api-access-zg4p9\") pod \"dns-operator-744455d44c-rq4rk\" (UID: \"45d8cc05-e612-40f5-a5dd-57d7abbadc51\") " pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzxn\" (UniqueName: \"kubernetes.io/projected/2c06adc8-f875-4eaf-929f-e703320771d1-kube-api-access-zvzxn\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215122 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-trusted-ca\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215163 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872f6308-0267-41cd-bc92-e401f9d7cda9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215203 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215237 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-registry-certificates\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215253 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-config\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-bound-sa-token\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215303 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-default-certificate\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215318 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkw9s\" (UniqueName: \"kubernetes.io/projected/b0483a63-0788-40f1-9d6e-6f3377195729-kube-api-access-rkw9s\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 
06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215333 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6d8v\" (UniqueName: \"kubernetes.io/projected/4051e58f-39a6-4ec5-a171-c5c700d4576a-kube-api-access-d6d8v\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215368 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-etcd-serving-ca\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215396 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4051e58f-39a6-4ec5-a171-c5c700d4576a-serving-cert\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215439 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f369975-6444-44d7-b85f-290ec604b172-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215468 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-metrics-certs\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215516 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-audit\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215577 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-client\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-stats-auth\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.215647 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2c06adc8-f875-4eaf-929f-e703320771d1-config\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.216879 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mh4\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-kube-api-access-f7mh4\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.216903 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-service-ca-bundle\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.217269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vq8l\" (UniqueName: \"kubernetes.io/projected/872f6308-0267-41cd-bc92-e401f9d7cda9-kube-api-access-9vq8l\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.218959 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:27.718945259 +0000 UTC m=+149.004697219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.220113 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c06adc8-f875-4eaf-929f-e703320771d1-serving-cert\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.220544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-etcd-client\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.243322 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.313309 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rnpnm"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324502 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.324709 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:27.824653068 +0000 UTC m=+149.110405028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324808 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-image-import-ca\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/872f6308-0267-41cd-bc92-e401f9d7cda9-metrics-tls\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324879 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sck8d\" (UniqueName: \"kubernetes.io/projected/7768dedd-2688-4975-ac80-cc98b354e7a5-kube-api-access-sck8d\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbz7w\" (UniqueName: \"kubernetes.io/projected/2f7a2cf2-a349-4491-88b1-c46345bc28a7-kube-api-access-cbz7w\") pod \"migrator-59844c95c7-4hh5t\" (UID: \"2f7a2cf2-a349-4491-88b1-c46345bc28a7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a0d23be-9267-406c-a67e-6970f6e8b922-node-pullsecrets\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfe3972b-7772-422d-8e45-fcf803a7d302-proxy-tls\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.324970 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9nld\" (UniqueName: \"kubernetes.io/projected/b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc-kube-api-access-v9nld\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnpns\" (UID: \"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325013 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7768dedd-2688-4975-ac80-cc98b354e7a5-metrics-tls\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325032 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4p9\" (UniqueName: \"kubernetes.io/projected/45d8cc05-e612-40f5-a5dd-57d7abbadc51-kube-api-access-zg4p9\") pod \"dns-operator-744455d44c-rq4rk\" (UID: \"45d8cc05-e612-40f5-a5dd-57d7abbadc51\") " pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/96b2e694-650a-4367-966b-4b53e8313c65-srv-cert\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325068 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3167c5-aeb3-4429-aca5-068eb77856f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325058 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a0d23be-9267-406c-a67e-6970f6e8b922-node-pullsecrets\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325086 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d436c-d1d0-40c8-b30a-86f201725949-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 
06:27:27.325155 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a587710-6059-48a3-a8c4-321a46b85508-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325320 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-trusted-ca\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zz2v\" (UniqueName: \"kubernetes.io/projected/27d936b5-b671-4d17-b9ee-bff849246c5a-kube-api-access-4zz2v\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325409 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-config\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325429 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/96b2e694-650a-4367-966b-4b53e8313c65-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325472 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkw9s\" (UniqueName: \"kubernetes.io/projected/b0483a63-0788-40f1-9d6e-6f3377195729-kube-api-access-rkw9s\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4051e58f-39a6-4ec5-a171-c5c700d4576a-serving-cert\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325898 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-image-import-ca\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.325964 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshlz\" (UniqueName: 
\"kubernetes.io/projected/dfe3972b-7772-422d-8e45-fcf803a7d302-kube-api-access-rshlz\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-metrics-certs\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326077 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-audit\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326117 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-plugins-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326136 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9chr\" (UniqueName: \"kubernetes.io/projected/66a38981-0dd2-4411-897f-4289cce13349-kube-api-access-t9chr\") pod \"multus-admission-controller-857f4d67dd-pr5dh\" (UID: \"66a38981-0dd2-4411-897f-4289cce13349\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326461 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-config\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-client\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326582 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-registration-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326598 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhkr\" (UniqueName: \"kubernetes.io/projected/96b2e694-650a-4367-966b-4b53e8313c65-kube-api-access-6rhkr\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" 
Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326619 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7768dedd-2688-4975-ac80-cc98b354e7a5-config-volume\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326689 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c06adc8-f875-4eaf-929f-e703320771d1-config\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326711 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/53c6447b-e10a-4b50-86d2-9cd52e4233a4-node-bootstrap-token\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326754 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vq8l\" (UniqueName: \"kubernetes.io/projected/872f6308-0267-41cd-bc92-e401f9d7cda9-kube-api-access-9vq8l\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69358a8e-999e-4fe5-881a-5868db35885c-cert\") pod \"ingress-canary-cd5rf\" (UID: \"69358a8e-999e-4fe5-881a-5868db35885c\") " pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326841 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326861 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-etcd-client\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.326878 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54f75ed5-6c32-4667-8209-bbf6ffb81043-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6bzcc\" (UID: \"54f75ed5-6c32-4667-8209-bbf6ffb81043\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.327065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-audit\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.327494 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-config-volume\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.327527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4w8\" (UniqueName: \"kubernetes.io/projected/53c6447b-e10a-4b50-86d2-9cd52e4233a4-kube-api-access-tc4w8\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.327561 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/66a38981-0dd2-4411-897f-4289cce13349-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pr5dh\" (UID: \"66a38981-0dd2-4411-897f-4289cce13349\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.327584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9865b222-071e-470e-8731-aed26f1bce5b-config\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.327882 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c06adc8-f875-4eaf-929f-e703320771d1-config\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328277 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e474bbe-7252-4213-be37-0916c3c6c1c0-webhook-cert\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdglc\" (UniqueName: \"kubernetes.io/projected/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-kube-api-access-jdglc\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328502 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-ca\") pod \"etcd-operator-b45778765-2d7rr\" (UID: 
\"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328542 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6j9\" (UniqueName: \"kubernetes.io/projected/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-kube-api-access-cv6j9\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328581 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfe3972b-7772-422d-8e45-fcf803a7d302-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-registry-tls\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a587710-6059-48a3-a8c4-321a46b85508-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.328904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-encryption-config\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329344 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd84q\" (UniqueName: \"kubernetes.io/projected/2e10270a-5495-4824-b934-523a90d07dca-kube-api-access-fd84q\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329643 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-ca\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329765 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzntn\" (UniqueName: \"kubernetes.io/projected/69358a8e-999e-4fe5-881a-5868db35885c-kube-api-access-kzntn\") pod \"ingress-canary-cd5rf\" (UID: 
\"69358a8e-999e-4fe5-881a-5868db35885c\") " pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e10270a-5495-4824-b934-523a90d07dca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329838 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfcf\" (UniqueName: \"kubernetes.io/projected/13d85d66-b028-416f-83be-0235701d9b1c-kube-api-access-zgfcf\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329880 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e10270a-5495-4824-b934-523a90d07dca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329941 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e474bbe-7252-4213-be37-0916c3c6c1c0-apiservice-cert\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329965 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-service-ca\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.329995 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnpns\" (UID: \"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f369975-6444-44d7-b85f-290ec604b172-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-config\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330080 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b0483a63-0788-40f1-9d6e-6f3377195729-machine-approver-tls\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-serving-cert\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzxn\" (UniqueName: \"kubernetes.io/projected/2c06adc8-f875-4eaf-929f-e703320771d1-kube-api-access-zvzxn\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872f6308-0267-41cd-bc92-e401f9d7cda9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330253 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330279 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsf2\" (UniqueName: \"kubernetes.io/projected/9e474bbe-7252-4213-be37-0916c3c6c1c0-kube-api-access-cqsf2\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 
06:27:27.330304 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9865b222-071e-470e-8731-aed26f1bce5b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330325 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/13d85d66-b028-416f-83be-0235701d9b1c-signing-cabundle\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330357 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-registry-certificates\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330383 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6d8v\" (UniqueName: \"kubernetes.io/projected/4051e58f-39a6-4ec5-a171-c5c700d4576a-kube-api-access-d6d8v\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330410 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-csi-data-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330439 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-bound-sa-token\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330463 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-default-certificate\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-etcd-serving-ca\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330517 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw6vq\" (UniqueName: 
\"kubernetes.io/projected/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-kube-api-access-vw6vq\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330539 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/53c6447b-e10a-4b50-86d2-9cd52e4233a4-certs\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-proxy-tls\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330698 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-service-ca\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330726 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f369975-6444-44d7-b85f-290ec604b172-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.330756 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv9q\" (UniqueName: \"kubernetes.io/projected/9a587710-6059-48a3-a8c4-321a46b85508-kube-api-access-9mv9q\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.331058 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:27.831033902 +0000 UTC m=+149.116785862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.331954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-registry-certificates\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.331958 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-etcd-serving-ca\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.332384 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e10270a-5495-4824-b934-523a90d07dca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.332652 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/13d85d66-b028-416f-83be-0235701d9b1c-signing-key\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.332960 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.332977 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4051e58f-39a6-4ec5-a171-c5c700d4576a-etcd-client\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.333855 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e10270a-5495-4824-b934-523a90d07dca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.334393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4051e58f-39a6-4ec5-a171-c5c700d4576a-serving-cert\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.334453 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/872f6308-0267-41cd-bc92-e401f9d7cda9-metrics-tls\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.334811 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-stats-auth\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.334829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f369975-6444-44d7-b85f-290ec604b172-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.334853 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mh4\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-kube-api-access-f7mh4\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.335367 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-trusted-ca\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.336188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-service-ca-bundle\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.336219 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-socket-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.348235 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxd6\" (UniqueName: \"kubernetes.io/projected/54f75ed5-6c32-4667-8209-bbf6ffb81043-kube-api-access-7wxd6\") pod \"package-server-manager-789f6589d5-6bzcc\" (UID: \"54f75ed5-6c32-4667-8209-bbf6ffb81043\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc 
kubenswrapper[4823]: I1206 06:27:27.353604 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-default-certificate\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.354277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-registry-tls\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.354503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-stats-auth\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.354947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b0483a63-0788-40f1-9d6e-6f3377195729-machine-approver-tls\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.355634 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-metrics-certs\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.356495 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c06adc8-f875-4eaf-929f-e703320771d1-serving-cert\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.357529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f369975-6444-44d7-b85f-290ec604b172-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.357545 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-service-ca-bundle\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.357806 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3167c5-aeb3-4429-aca5-068eb77856f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: 
\"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.357848 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9865b222-071e-470e-8731-aed26f1bce5b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.357893 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc6d436c-d1d0-40c8-b30a-86f201725949-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.357924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a0d23be-9267-406c-a67e-6970f6e8b922-audit-dir\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.358209 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a0d23be-9267-406c-a67e-6970f6e8b922-audit-dir\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.358254 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-mountpoint-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.358305 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-secret-volume\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.358372 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872f6308-0267-41cd-bc92-e401f9d7cda9-trusted-ca\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.358429 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d436c-d1d0-40c8-b30a-86f201725949-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" 
Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360257 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c06adc8-f875-4eaf-929f-e703320771d1-serving-cert\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360400 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48ns\" (UniqueName: \"kubernetes.io/projected/3a0d23be-9267-406c-a67e-6970f6e8b922-kube-api-access-d48ns\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0483a63-0788-40f1-9d6e-6f3377195729-config\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360571 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpg6\" (UniqueName: \"kubernetes.io/projected/53a18f23-f29e-43ba-8568-855cb4550b7b-kube-api-access-7cpg6\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3167c5-aeb3-4429-aca5-068eb77856f2-config\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360638 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9e474bbe-7252-4213-be37-0916c3c6c1c0-tmpfs\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360744 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfe3972b-7772-422d-8e45-fcf803a7d302-images\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360783 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b0483a63-0788-40f1-9d6e-6f3377195729-auth-proxy-config\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.360828 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45d8cc05-e612-40f5-a5dd-57d7abbadc51-metrics-tls\") pod \"dns-operator-744455d44c-rq4rk\" (UID: \"45d8cc05-e612-40f5-a5dd-57d7abbadc51\") " pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.361546 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0483a63-0788-40f1-9d6e-6f3377195729-config\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.363749 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45d8cc05-e612-40f5-a5dd-57d7abbadc51-metrics-tls\") pod \"dns-operator-744455d44c-rq4rk\" (UID: \"45d8cc05-e612-40f5-a5dd-57d7abbadc51\") " pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.370493 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4p9\" (UniqueName: \"kubernetes.io/projected/45d8cc05-e612-40f5-a5dd-57d7abbadc51-kube-api-access-zg4p9\") pod \"dns-operator-744455d44c-rq4rk\" (UID: \"45d8cc05-e612-40f5-a5dd-57d7abbadc51\") " pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.378198 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkw9s\" (UniqueName: \"kubernetes.io/projected/b0483a63-0788-40f1-9d6e-6f3377195729-kube-api-access-rkw9s\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.393037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0d23be-9267-406c-a67e-6970f6e8b922-config\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.393081 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-encryption-config\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.393124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-etcd-client\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 
06:27:27.394559 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b0483a63-0788-40f1-9d6e-6f3377195729-auth-proxy-config\") pod \"machine-approver-56656f9798-qtfn6\" (UID: \"b0483a63-0788-40f1-9d6e-6f3377195729\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.395030 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0d23be-9267-406c-a67e-6970f6e8b922-serving-cert\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.396176 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872f6308-0267-41cd-bc92-e401f9d7cda9-trusted-ca\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.403501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vq8l\" (UniqueName: \"kubernetes.io/projected/872f6308-0267-41cd-bc92-e401f9d7cda9-kube-api-access-9vq8l\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.410512 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wzsch"] Dec 06 06:27:27 crc kubenswrapper[4823]: W1206 06:27:27.421398 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a8b35ec_e034_4e77_9fa9_9b62b0cbd82f.slice/crio-3ae501e5cba20fed10fa6d283353c6e0feebbdea80bd6e85efe7ccde6ecd5303 WatchSource:0}: Error finding container 3ae501e5cba20fed10fa6d283353c6e0feebbdea80bd6e85efe7ccde6ecd5303: Status 404 returned error can't find the container with id 3ae501e5cba20fed10fa6d283353c6e0feebbdea80bd6e85efe7ccde6ecd5303 Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.426889 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j27ls"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.438412 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdglc\" (UniqueName: \"kubernetes.io/projected/6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e-kube-api-access-jdglc\") pod \"router-default-5444994796-4rlt6\" (UID: \"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e\") " pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: W1206 06:27:27.440363 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode802aa0a_cd13_43df_be69_40b0bca7200f.slice/crio-1d87b4ed24bc0cd85b3c61eca48f485fbdb095433efed6a335a8f4c27aad7936 WatchSource:0}: Error finding container 1d87b4ed24bc0cd85b3c61eca48f485fbdb095433efed6a335a8f4c27aad7936: Status 404 returned error can't find the container with id 1d87b4ed24bc0cd85b3c61eca48f485fbdb095433efed6a335a8f4c27aad7936 Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.458456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fd84q\" (UniqueName: \"kubernetes.io/projected/2e10270a-5495-4824-b934-523a90d07dca-kube-api-access-fd84q\") pod \"openshift-controller-manager-operator-756b6f6bc6-cphfr\" (UID: \"2e10270a-5495-4824-b934-523a90d07dca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.461829 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.461975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sck8d\" (UniqueName: \"kubernetes.io/projected/7768dedd-2688-4975-ac80-cc98b354e7a5-kube-api-access-sck8d\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.461997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbz7w\" (UniqueName: \"kubernetes.io/projected/2f7a2cf2-a349-4491-88b1-c46345bc28a7-kube-api-access-cbz7w\") pod \"migrator-59844c95c7-4hh5t\" (UID: \"2f7a2cf2-a349-4491-88b1-c46345bc28a7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462018 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfe3972b-7772-422d-8e45-fcf803a7d302-proxy-tls\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462035 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9nld\" (UniqueName: \"kubernetes.io/projected/b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc-kube-api-access-v9nld\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnpns\" (UID: \"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462056 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/96b2e694-650a-4367-966b-4b53e8313c65-srv-cert\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3167c5-aeb3-4429-aca5-068eb77856f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462085 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d436c-d1d0-40c8-b30a-86f201725949-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462101 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7768dedd-2688-4975-ac80-cc98b354e7a5-metrics-tls\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462117 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a587710-6059-48a3-a8c4-321a46b85508-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462135 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zz2v\" (UniqueName: \"kubernetes.io/projected/27d936b5-b671-4d17-b9ee-bff849246c5a-kube-api-access-4zz2v\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/96b2e694-650a-4367-966b-4b53e8313c65-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462183 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rshlz\" (UniqueName: \"kubernetes.io/projected/dfe3972b-7772-422d-8e45-fcf803a7d302-kube-api-access-rshlz\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462204 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-plugins-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9chr\" (UniqueName: \"kubernetes.io/projected/66a38981-0dd2-4411-897f-4289cce13349-kube-api-access-t9chr\") pod \"multus-admission-controller-857f4d67dd-pr5dh\" (UID: \"66a38981-0dd2-4411-897f-4289cce13349\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462238 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-registration-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 
06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462255 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rhkr\" (UniqueName: \"kubernetes.io/projected/96b2e694-650a-4367-966b-4b53e8313c65-kube-api-access-6rhkr\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7768dedd-2688-4975-ac80-cc98b354e7a5-config-volume\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/53c6447b-e10a-4b50-86d2-9cd52e4233a4-node-bootstrap-token\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462307 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69358a8e-999e-4fe5-881a-5868db35885c-cert\") pod \"ingress-canary-cd5rf\" (UID: \"69358a8e-999e-4fe5-881a-5868db35885c\") " pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54f75ed5-6c32-4667-8209-bbf6ffb81043-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6bzcc\" (UID: \"54f75ed5-6c32-4667-8209-bbf6ffb81043\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462344 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462363 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-config-volume\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4w8\" (UniqueName: \"kubernetes.io/projected/53c6447b-e10a-4b50-86d2-9cd52e4233a4-kube-api-access-tc4w8\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462398 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/66a38981-0dd2-4411-897f-4289cce13349-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pr5dh\" (UID: \"66a38981-0dd2-4411-897f-4289cce13349\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462414 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9865b222-071e-470e-8731-aed26f1bce5b-config\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462431 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e474bbe-7252-4213-be37-0916c3c6c1c0-webhook-cert\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462449 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6j9\" (UniqueName: \"kubernetes.io/projected/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-kube-api-access-cv6j9\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462466 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfe3972b-7772-422d-8e45-fcf803a7d302-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462485 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a587710-6059-48a3-a8c4-321a46b85508-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462502 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzntn\" (UniqueName: \"kubernetes.io/projected/69358a8e-999e-4fe5-881a-5868db35885c-kube-api-access-kzntn\") pod \"ingress-canary-cd5rf\" (UID: \"69358a8e-999e-4fe5-881a-5868db35885c\") " pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462521 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgfcf\" (UniqueName: \"kubernetes.io/projected/13d85d66-b028-416f-83be-0235701d9b1c-kube-api-access-zgfcf\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-trusted-ca\") 
pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.462573 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:27.962550302 +0000 UTC m=+149.248302332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462611 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e474bbe-7252-4213-be37-0916c3c6c1c0-apiservice-cert\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnpns\" (UID: \"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462717 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462741 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsf2\" (UniqueName: \"kubernetes.io/projected/9e474bbe-7252-4213-be37-0916c3c6c1c0-kube-api-access-cqsf2\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462760 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9865b222-071e-470e-8731-aed26f1bce5b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/13d85d66-b028-416f-83be-0235701d9b1c-signing-cabundle\") pod 
\"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462844 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-csi-data-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462870 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw6vq\" (UniqueName: \"kubernetes.io/projected/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-kube-api-access-vw6vq\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462888 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/53c6447b-e10a-4b50-86d2-9cd52e4233a4-certs\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462916 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv9q\" (UniqueName: \"kubernetes.io/projected/9a587710-6059-48a3-a8c4-321a46b85508-kube-api-access-9mv9q\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462935 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-proxy-tls\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462960 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/13d85d66-b028-416f-83be-0235701d9b1c-signing-key\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462988 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-socket-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxd6\" (UniqueName: \"kubernetes.io/projected/54f75ed5-6c32-4667-8209-bbf6ffb81043-kube-api-access-7wxd6\") pod \"package-server-manager-789f6589d5-6bzcc\" (UID: \"54f75ed5-6c32-4667-8209-bbf6ffb81043\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 
06:27:27.463040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3167c5-aeb3-4429-aca5-068eb77856f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9865b222-071e-470e-8731-aed26f1bce5b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463077 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc6d436c-d1d0-40c8-b30a-86f201725949-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463099 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-mountpoint-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463115 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-secret-volume\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463136 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d436c-d1d0-40c8-b30a-86f201725949-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cpg6\" (UniqueName: \"kubernetes.io/projected/53a18f23-f29e-43ba-8568-855cb4550b7b-kube-api-access-7cpg6\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463187 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3167c5-aeb3-4429-aca5-068eb77856f2-config\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463208 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9e474bbe-7252-4213-be37-0916c3c6c1c0-tmpfs\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463246 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfe3972b-7772-422d-8e45-fcf803a7d302-images\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9865b222-071e-470e-8731-aed26f1bce5b-config\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.463896 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfe3972b-7772-422d-8e45-fcf803a7d302-images\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.464125 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:27.964117604 +0000 UTC m=+149.249869554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.462154 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.466058 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/53c6447b-e10a-4b50-86d2-9cd52e4233a4-node-bootstrap-token\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.466279 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/13d85d66-b028-416f-83be-0235701d9b1c-signing-cabundle\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.466360 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-csi-data-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.466518 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-plugins-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.466711 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-registration-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.467231 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7768dedd-2688-4975-ac80-cc98b354e7a5-config-volume\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.468074 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-config-volume\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.469176 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/96b2e694-650a-4367-966b-4b53e8313c65-srv-cert\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.469254 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfe3972b-7772-422d-8e45-fcf803a7d302-proxy-tls\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.470135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.470409 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3167c5-aeb3-4429-aca5-068eb77856f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.470764 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-mountpoint-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.470884 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27d936b5-b671-4d17-b9ee-bff849246c5a-socket-dir\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.471640 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69358a8e-999e-4fe5-881a-5868db35885c-cert\") pod \"ingress-canary-cd5rf\" (UID: \"69358a8e-999e-4fe5-881a-5868db35885c\") " pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.472139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3167c5-aeb3-4429-aca5-068eb77856f2-config\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.472356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/53c6447b-e10a-4b50-86d2-9cd52e4233a4-certs\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.472843 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/9e474bbe-7252-4213-be37-0916c3c6c1c0-tmpfs\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.472772 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d436c-d1d0-40c8-b30a-86f201725949-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.473169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a587710-6059-48a3-a8c4-321a46b85508-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.473478 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfe3972b-7772-422d-8e45-fcf803a7d302-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.473979 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a587710-6059-48a3-a8c4-321a46b85508-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.476378 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54f75ed5-6c32-4667-8209-bbf6ffb81043-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6bzcc\" (UID: \"54f75ed5-6c32-4667-8209-bbf6ffb81043\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.476400 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnpns\" (UID: \"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.477439 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e474bbe-7252-4213-be37-0916c3c6c1c0-webhook-cert\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.478576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fc6d436c-d1d0-40c8-b30a-86f201725949-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.478834 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7768dedd-2688-4975-ac80-cc98b354e7a5-metrics-tls\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.480703 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/66a38981-0dd2-4411-897f-4289cce13349-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pr5dh\" (UID: \"66a38981-0dd2-4411-897f-4289cce13349\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.481250 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.481433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-secret-volume\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.481814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/96b2e694-650a-4367-966b-4b53e8313c65-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.483274 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e474bbe-7252-4213-be37-0916c3c6c1c0-apiservice-cert\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.484928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9865b222-071e-470e-8731-aed26f1bce5b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.485122 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.491309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.491610 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872f6308-0267-41cd-bc92-e401f9d7cda9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bf79s\" (UID: \"872f6308-0267-41cd-bc92-e401f9d7cda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.495804 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-proxy-tls\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.497230 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/13d85d66-b028-416f-83be-0235701d9b1c-signing-key\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.501931 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.511390 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.516403 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-bound-sa-token\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.520686 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.535091 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzxn\" (UniqueName: \"kubernetes.io/projected/2c06adc8-f875-4eaf-929f-e703320771d1-kube-api-access-zvzxn\") pod \"service-ca-operator-777779d784-cjkks\" (UID: \"2c06adc8-f875-4eaf-929f-e703320771d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.552382 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6d8v\" (UniqueName: \"kubernetes.io/projected/4051e58f-39a6-4ec5-a171-c5c700d4576a-kube-api-access-d6d8v\") pod \"etcd-operator-b45778765-2d7rr\" (UID: \"4051e58f-39a6-4ec5-a171-c5c700d4576a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.552754 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.553806 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.560867 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.562624 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.565439 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.565627 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.065605078 +0000 UTC m=+149.351357038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.565770 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.566333 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.066317867 +0000 UTC m=+149.352069827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.570102 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mh4\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-kube-api-access-f7mh4\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.575956 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.580792 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48ns\" (UniqueName: \"kubernetes.io/projected/3a0d23be-9267-406c-a67e-6970f6e8b922-kube-api-access-d48ns\") pod \"apiserver-76f77b778f-fwj5n\" (UID: \"3a0d23be-9267-406c-a67e-6970f6e8b922\") " pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.622515 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbz7w\" (UniqueName: \"kubernetes.io/projected/2f7a2cf2-a349-4491-88b1-c46345bc28a7-kube-api-access-cbz7w\") pod \"migrator-59844c95c7-4hh5t\" (UID: \"2f7a2cf2-a349-4491-88b1-c46345bc28a7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.623027 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmzfs"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.624397 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.647211 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9nld\" (UniqueName: \"kubernetes.io/projected/b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc-kube-api-access-v9nld\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnpns\" (UID: \"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: W1206 06:27:27.656763 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e640ad8_265d_4c39_976e_3e772057d0d0.slice/crio-e74dfb988063151e8fa74ce822ae5be063c321845359de7cc65e4f7ebce42ffd WatchSource:0}: Error finding container e74dfb988063151e8fa74ce822ae5be063c321845359de7cc65e4f7ebce42ffd: Status 404 returned error can't find the container with id e74dfb988063151e8fa74ce822ae5be063c321845359de7cc65e4f7ebce42ffd Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.658531 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsf2\" (UniqueName: \"kubernetes.io/projected/9e474bbe-7252-4213-be37-0916c3c6c1c0-kube-api-access-cqsf2\") pod \"packageserver-d55dfcdfc-mghr2\" (UID: \"9e474bbe-7252-4213-be37-0916c3c6c1c0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.669913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.669972 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml"] Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.670260 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.170239837 +0000 UTC m=+149.455991797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.680026 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9865b222-071e-470e-8731-aed26f1bce5b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cjc4q\" (UID: \"9865b222-071e-470e-8731-aed26f1bce5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: W1206 06:27:27.696710 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7602e8b_a241_4125_aa10_3e9c5a2ca5bd.slice/crio-e349dc7ed3f6a946332eb6628609ee3667997ba58a4dfe4e91a11d93e078040c WatchSource:0}: Error finding container e349dc7ed3f6a946332eb6628609ee3667997ba58a4dfe4e91a11d93e078040c: Status 404 returned error can't find the container with id e349dc7ed3f6a946332eb6628609ee3667997ba58a4dfe4e91a11d93e078040c Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.701882 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.702025 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshlz\" (UniqueName: \"kubernetes.io/projected/dfe3972b-7772-422d-8e45-fcf803a7d302-kube-api-access-rshlz\") pod \"machine-config-operator-74547568cd-4drs2\" (UID: \"dfe3972b-7772-422d-8e45-fcf803a7d302\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.722790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw6vq\" (UniqueName: \"kubernetes.io/projected/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-kube-api-access-vw6vq\") pod \"collect-profiles-29416695-czmn9\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.737732 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9chr\" (UniqueName: \"kubernetes.io/projected/66a38981-0dd2-4411-897f-4289cce13349-kube-api-access-t9chr\") pod \"multus-admission-controller-857f4d67dd-pr5dh\" (UID: \"66a38981-0dd2-4411-897f-4289cce13349\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.762005 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhkr\" (UniqueName: \"kubernetes.io/projected/96b2e694-650a-4367-966b-4b53e8313c65-kube-api-access-6rhkr\") pod \"olm-operator-6b444d44fb-9bc9c\" (UID: \"96b2e694-650a-4367-966b-4b53e8313c65\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc 
kubenswrapper[4823]: I1206 06:27:27.771445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.771950 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.271934946 +0000 UTC m=+149.557686906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.779240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sck8d\" (UniqueName: \"kubernetes.io/projected/7768dedd-2688-4975-ac80-cc98b354e7a5-kube-api-access-sck8d\") pod \"dns-default-nbvlv\" (UID: \"7768dedd-2688-4975-ac80-cc98b354e7a5\") " pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.794582 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.802227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc6d436c-d1d0-40c8-b30a-86f201725949-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nsc26\" (UID: \"fc6d436c-d1d0-40c8-b30a-86f201725949\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.826097 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mv9q\" (UniqueName: \"kubernetes.io/projected/9a587710-6059-48a3-a8c4-321a46b85508-kube-api-access-9mv9q\") pod \"kube-storage-version-migrator-operator-b67b599dd-77z7z\" (UID: \"9a587710-6059-48a3-a8c4-321a46b85508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.830359 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.840192 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4w8\" (UniqueName: \"kubernetes.io/projected/53c6447b-e10a-4b50-86d2-9cd52e4233a4-kube-api-access-tc4w8\") pod \"machine-config-server-txkg5\" (UID: \"53c6447b-e10a-4b50-86d2-9cd52e4233a4\") " pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.849160 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-rq4rk"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.864555 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxd6\" (UniqueName: \"kubernetes.io/projected/54f75ed5-6c32-4667-8209-bbf6ffb81043-kube-api-access-7wxd6\") pod \"package-server-manager-789f6589d5-6bzcc\" (UID: \"54f75ed5-6c32-4667-8209-bbf6ffb81043\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.866457 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.872457 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.873737 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.373713888 +0000 UTC m=+149.659465848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.879309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zz2v\" (UniqueName: \"kubernetes.io/projected/27d936b5-b671-4d17-b9ee-bff849246c5a-kube-api-access-4zz2v\") pod \"csi-hostpathplugin-24zhr\" (UID: \"27d936b5-b671-4d17-b9ee-bff849246c5a\") " pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:27 crc kubenswrapper[4823]: W1206 06:27:27.880759 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6434f84a_f26e_4bf8_8ea9_cbc987ad0b1e.slice/crio-098c4211c364765eb405b4d910b07b4803f43cf427cbdfca1168f7b1519df4cc WatchSource:0}: Error finding container 098c4211c364765eb405b4d910b07b4803f43cf427cbdfca1168f7b1519df4cc: Status 404 returned error can't find the container with id 098c4211c364765eb405b4d910b07b4803f43cf427cbdfca1168f7b1519df4cc Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.884815 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.885498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" event={"ID":"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e","Type":"ContainerStarted","Data":"c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.885542 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" event={"ID":"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e","Type":"ContainerStarted","Data":"f8077a323114c5f6f30ba3cf97c425a5b118788b392198f289a2a973973d3306"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.893176 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.894582 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.897077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" event={"ID":"8de1c076-eee2-4e97-9916-3ff159867471","Type":"ContainerStarted","Data":"9f5ba23f5d64eef76673491e6017882731803e92a5afd80be8a8c6a6ec55c1a9"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.899854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" event={"ID":"7df56bd6-4c2c-4432-b312-51019bf0f458","Type":"ContainerStarted","Data":"c59689787f060e727f04677ad622936dc3fd55f135999fb0943da0783b0cb072"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.899901 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" event={"ID":"7df56bd6-4c2c-4432-b312-51019bf0f458","Type":"ContainerStarted","Data":"ccc801f36e2437abf2cafa83ff354c1412d7524cafeeea55cfa5898c7fc72ae1"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.903365 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.917107 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.917228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3167c5-aeb3-4429-aca5-068eb77856f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bqbf4\" (UID: \"1a3167c5-aeb3-4429-aca5-068eb77856f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.917465 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" event={"ID":"0e640ad8-265d-4c39-976e-3e772057d0d0","Type":"ContainerStarted","Data":"e74dfb988063151e8fa74ce822ae5be063c321845359de7cc65e4f7ebce42ffd"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.923512 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" event={"ID":"913bebf0-c7cd-40f4-b429-fe18368c8076","Type":"ContainerStarted","Data":"85f6b3116033edc7214d7d6e02e0634e4d4110cdc34261e78d0fe4a8b2af70c4"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.923561 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" event={"ID":"913bebf0-c7cd-40f4-b429-fe18368c8076","Type":"ContainerStarted","Data":"1440fc21bd4d3abd8452311349536af584dfe424381c87ddfafe102d30382549"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.924330 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cpg6\" (UniqueName: \"kubernetes.io/projected/53a18f23-f29e-43ba-8568-855cb4550b7b-kube-api-access-7cpg6\") pod \"marketplace-operator-79b997595-2cjj5\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.926267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j27ls" event={"ID":"15cb22b1-04c0-45b6-81fa-9cb976a1aecb","Type":"ContainerStarted","Data":"0fae37c782b46a2e9574135df04658906a2356e67d12246a714faf11cf8514f7"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.927706 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.935440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" event={"ID":"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f","Type":"ContainerStarted","Data":"3ae501e5cba20fed10fa6d283353c6e0feebbdea80bd6e85efe7ccde6ecd5303"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.936854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" event={"ID":"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52","Type":"ContainerStarted","Data":"89b2d8bb1069cb12c0f452071c50628dfab55c6cff0b0a1f94b1e14390797359"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.936875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" event={"ID":"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52","Type":"ContainerStarted","Data":"b16de22ca61469fc2642eec1ee8b674ddd59a61e6745c38545ba715654feb47c"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.938113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzntn\" (UniqueName: \"kubernetes.io/projected/69358a8e-999e-4fe5-881a-5868db35885c-kube-api-access-kzntn\") pod \"ingress-canary-cd5rf\" (UID: \"69358a8e-999e-4fe5-881a-5868db35885c\") " pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.944611 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.944995 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"40c33e86328628fbe682cfc608f54c301f28ea5a72531b057fc1585ee0f81970"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.945132 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.947562 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9g789" event={"ID":"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51","Type":"ContainerStarted","Data":"dd9658ce5da80535833e84b0decaaad9053f2ec5884ed9e68e2d302ca3b4ee07"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.947608 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9g789" event={"ID":"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51","Type":"ContainerStarted","Data":"0a859f6815cb0effe8cbf809b207d0541dd98d71fe2147de3a7eb5b4d1e3f2aa"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.948591 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.954500 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.955792 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.955835 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.957372 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjkks"] Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.959595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" event={"ID":"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1","Type":"ContainerStarted","Data":"46092483c609647210ea08366f1bc1737e85dc6dc91a2a65927615a9078e2eac"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.962270 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.962556 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" event={"ID":"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2","Type":"ContainerStarted","Data":"fe680b0e33cf433e88c4b53d11331c8c1f0de0da7656391359ada749b29e81ab"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.965066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wzsch" event={"ID":"e802aa0a-cd13-43df-be69-40b0bca7200f","Type":"ContainerStarted","Data":"1d87b4ed24bc0cd85b3c61eca48f485fbdb095433efed6a335a8f4c27aad7936"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.966701 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6j9\" (UniqueName: \"kubernetes.io/projected/87cf77eb-9dc0-43ee-a48b-78a38378b0f1-kube-api-access-cv6j9\") pod \"machine-config-controller-84d6567774-vmw25\" (UID: \"87cf77eb-9dc0-43ee-a48b-78a38378b0f1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.967911 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" event={"ID":"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd","Type":"ContainerStarted","Data":"e349dc7ed3f6a946332eb6628609ee3667997ba58a4dfe4e91a11d93e078040c"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.969961 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" event={"ID":"b0483a63-0788-40f1-9d6e-6f3377195729","Type":"ContainerStarted","Data":"d47bec31b2a59c05fe186ff8eb1b8f7b5f851c8e14c48c7c28019c81e596dac7"} Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.974547 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:27 crc kubenswrapper[4823]: E1206 06:27:27.974960 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.474941394 +0000 UTC m=+149.760693394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:27 crc kubenswrapper[4823]: I1206 06:27:27.977536 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfcf\" (UniqueName: \"kubernetes.io/projected/13d85d66-b028-416f-83be-0235701d9b1c-kube-api-access-zgfcf\") pod \"service-ca-9c57cc56f-h8lzb\" (UID: \"13d85d66-b028-416f-83be-0235701d9b1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.005052 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.005181 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.008124 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2d7rr"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.011594 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.032063 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.040238 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.049842 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cd5rf" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.057428 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-txkg5" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.075174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.075324 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.575297837 +0000 UTC m=+149.861049787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.075459 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.075837 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.575826262 +0000 UTC m=+149.861578222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.176575 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.176863 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.676848022 +0000 UTC m=+149.962599982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.219575 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.277656 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.278284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.278617 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.778602123 +0000 UTC m=+150.064354083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.351136 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.385360 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.385713 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.885687709 +0000 UTC m=+150.171439669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.434123 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fwj5n"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.484595 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nbvlv"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.488630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.488943 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:28.98892871 +0000 UTC m=+150.274680670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.528874 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.589642 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.589757 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.089733045 +0000 UTC m=+150.375485005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.589903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.590276 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.09026181 +0000 UTC m=+150.376013780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.623456 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.669210 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26"] Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.691352 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.691486 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.191460615 +0000 UTC m=+150.477212575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.691881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.713758 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.213740012 +0000 UTC m=+150.499491972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:28 crc kubenswrapper[4823]: W1206 06:27:28.783106 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc6d436c_d1d0_40c8_b30a_86f201725949.slice/crio-d2fe30b700d625c6a5a5c1c8fb729b82f1afe207a0834f4687ee198a6ddc1117 WatchSource:0}: Error finding container d2fe30b700d625c6a5a5c1c8fb729b82f1afe207a0834f4687ee198a6ddc1117: Status 404 returned error can't find the container with id d2fe30b700d625c6a5a5c1c8fb729b82f1afe207a0834f4687ee198a6ddc1117 Dec 06 06:27:28 crc kubenswrapper[4823]: W1206 06:27:28.783721 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95a0336c_e3d8_4290_a0bb_62d7a7f357ba.slice/crio-74516f7e8a6d3477f811c9a91127987b53b392432fdca1c2a235d7e745f9b2da WatchSource:0}: Error finding container 74516f7e8a6d3477f811c9a91127987b53b392432fdca1c2a235d7e745f9b2da: Status 404 returned error can't find the container with id 74516f7e8a6d3477f811c9a91127987b53b392432fdca1c2a235d7e745f9b2da Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.797388 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.797916 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 06:27:29.297895484 +0000 UTC m=+150.583647444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.875763 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9g789" podStartSLOduration=125.875744744 podStartE2EDuration="2m5.875744744s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:28.849109808 +0000 UTC m=+150.134861768" watchObservedRunningTime="2025-12-06 06:27:28.875744744 +0000 UTC m=+150.161496694"
Dec 06 06:27:28 crc kubenswrapper[4823]: I1206 06:27:28.899601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:28 crc kubenswrapper[4823]: E1206 06:27:28.900038 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.400024435 +0000 UTC m=+150.685776395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.015421 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" event={"ID":"4351e984-b1eb-4ffc-96b8-5e37536b79da","Type":"ContainerStarted","Data":"d75ef49a8e1c00151d94c61a94b3c838349c296c41679b495db7a9910e4665d3"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.018571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nbvlv" event={"ID":"7768dedd-2688-4975-ac80-cc98b354e7a5","Type":"ContainerStarted","Data":"1a4b060861541dc26efa59f22ee258b103d031a4db0cb16c0dd2be58244e1810"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.027437 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.027708 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.527684531 +0000 UTC m=+150.813436491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.028779 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.029186 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.529171381 +0000 UTC m=+150.814923341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.038875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" event={"ID":"913bebf0-c7cd-40f4-b429-fe18368c8076","Type":"ContainerStarted","Data":"04c30ea2acae0a4fc302d3eec1dfa70449fff0e786a49e9d3cdb148068fca1f3"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.085024 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" event={"ID":"f7602e8b-a241-4125-aa10-3e9c5a2ca5bd","Type":"ContainerStarted","Data":"7f3444102d0acb6d4fedb3f481466de21e983b341c5613dc4b0314053f95145a"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.087899 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" event={"ID":"0e640ad8-265d-4c39-976e-3e772057d0d0","Type":"ContainerStarted","Data":"bf247aceff8b3a0d773ff7a99f5feeac1bebe3f50e2dd49bd864cfb3843059f4"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.094420 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" event={"ID":"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f","Type":"ContainerStarted","Data":"79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.098590 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.100909 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-txkg5" event={"ID":"53c6447b-e10a-4b50-86d2-9cd52e4233a4","Type":"ContainerStarted","Data":"765d00da0098ab5fdc87947ac39f25a71e3e211853c80f26c55468da5cabcf7c"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.109523 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" event={"ID":"54f75ed5-6c32-4667-8209-bbf6ffb81043","Type":"ContainerStarted","Data":"7c62bb0eb107d3f45f333ec136ed9d5292f26c79ddd00c9cde40c8fcb0c11f99"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.113570 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wzsch" event={"ID":"e802aa0a-cd13-43df-be69-40b0bca7200f","Type":"ContainerStarted","Data":"d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.116128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" event={"ID":"fc6d436c-d1d0-40c8-b30a-86f201725949","Type":"ContainerStarted","Data":"d2fe30b700d625c6a5a5c1c8fb729b82f1afe207a0834f4687ee198a6ddc1117"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.117445 4823 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rnpnm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.117485 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.119474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" event={"ID":"2e10270a-5495-4824-b934-523a90d07dca","Type":"ContainerStarted","Data":"b7f4c619d910ed47068fc0f7f36bf6ed11598f155c8b3c4a39a4057bd552cd27"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.123689 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" event={"ID":"4051e58f-39a6-4ec5-a171-c5c700d4576a","Type":"ContainerStarted","Data":"aee841f5ad72745f60ba0a3a21f21b992b4a9583e0bab762f412f0619588ee12"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.128905 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.130801 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.130935 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.630916632 +0000 UTC m=+150.916668592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.131068 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.131515 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" event={"ID":"872f6308-0267-41cd-bc92-e401f9d7cda9","Type":"ContainerStarted","Data":"ec5800e23fd4da8303fd2be5d784d6a8ea53f40fd93418e53963aaa2d7477557"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.133854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" event={"ID":"b88f4c63-eec1-4f9c-97d9-5d0b1eae95a1","Type":"ContainerStarted","Data":"6454f3c62dfa85e761ab8ee9b5e29718cfe1c919fda4c90cec46dc2f346e756e"}
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.131655 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.631641402 +0000 UTC m=+150.917393362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.137347 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" event={"ID":"b0483a63-0788-40f1-9d6e-6f3377195729","Type":"ContainerStarted","Data":"c485d9f833c6b635a7dcfdc603a78606c905e6577448a2671fb1149cc4eca792"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.140280 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" event={"ID":"8de1c076-eee2-4e97-9916-3ff159867471","Type":"ContainerStarted","Data":"68c7ff953ebc0e68e634fc379a69bcb739c97210bbca75b95652470829cf9e82"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.142232 4823 generic.go:334] "Generic (PLEG): container finished" podID="7df56bd6-4c2c-4432-b312-51019bf0f458" containerID="c59689787f060e727f04677ad622936dc3fd55f135999fb0943da0783b0cb072" exitCode=0
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.150245 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" event={"ID":"7df56bd6-4c2c-4432-b312-51019bf0f458","Type":"ContainerDied","Data":"c59689787f060e727f04677ad622936dc3fd55f135999fb0943da0783b0cb072"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" event={"ID":"3a0d23be-9267-406c-a67e-6970f6e8b922","Type":"ContainerStarted","Data":"e385ec2a5291f1c492273efc7c92f9a7a8e7885cd0f8dad71a78329468cc41f9"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182821 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" event={"ID":"2c06adc8-f875-4eaf-929f-e703320771d1","Type":"ContainerStarted","Data":"99820c6e33334d69070e5466af4fa0aa9546cc971e9427500adbd7ad41991f75"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182838 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4rlt6" event={"ID":"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e","Type":"ContainerStarted","Data":"f47de08637fab17db2828fe6fee2a77666dfdd0dc67b7a7ce90af8f9683c9c8f"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4rlt6" event={"ID":"6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e","Type":"ContainerStarted","Data":"098c4211c364765eb405b4d910b07b4803f43cf427cbdfca1168f7b1519df4cc"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182868 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" event={"ID":"45d8cc05-e612-40f5-a5dd-57d7abbadc51","Type":"ContainerStarted","Data":"5baf38a8a87852f80022af9f75d6d2550c33e471701a218443da93ae9ea5b79c"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182880 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" event={"ID":"95a0336c-e3d8-4290-a0bb-62d7a7f357ba","Type":"ContainerStarted","Data":"74516f7e8a6d3477f811c9a91127987b53b392432fdca1c2a235d7e745f9b2da"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.182894 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" event={"ID":"2f7a2cf2-a349-4491-88b1-c46345bc28a7","Type":"ContainerStarted","Data":"bce29ccfff0575278d1ecca4fc15457f72198ee735703329e9809578dd08b22d"}
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.196055 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.196099 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.196180 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.196211 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.208929 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.216127 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.232148 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.233408 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.733383862 +0000 UTC m=+151.019135832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.336416 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.337158 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.837140808 +0000 UTC m=+151.122892768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.437544 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.437927 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:29.937910732 +0000 UTC m=+151.223662692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.488128 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.538894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.541845 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.041829421 +0000 UTC m=+151.327581381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.602339 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4"]
Dec 06 06:27:29 crc kubenswrapper[4823]: W1206 06:27:29.615760 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3167c5_aeb3_4429_aca5_068eb77856f2.slice/crio-d453d5cc3366d6502a5f1929c1e81d2d6f9922f47c17f6e293b678b3e4a6194e WatchSource:0}: Error finding container d453d5cc3366d6502a5f1929c1e81d2d6f9922f47c17f6e293b678b3e4a6194e: Status 404 returned error can't find the container with id d453d5cc3366d6502a5f1929c1e81d2d6f9922f47c17f6e293b678b3e4a6194e
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.630611 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.642720 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.643128 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.143110379 +0000 UTC m=+151.428862339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: W1206 06:27:29.651305 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a18f23_f29e_43ba_8568_855cb4550b7b.slice/crio-5151646f3ee75f326c07c901c17bf43d448523955a014bab9bfb68970f26cb25 WatchSource:0}: Error finding container 5151646f3ee75f326c07c901c17bf43d448523955a014bab9bfb68970f26cb25: Status 404 returned error can't find the container with id 5151646f3ee75f326c07c901c17bf43d448523955a014bab9bfb68970f26cb25
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.652087 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.657192 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.661171 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2cjj5"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.663435 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h8lzb"]
Dec 06 06:27:29 crc kubenswrapper[4823]: W1206 06:27:29.663840 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe3972b_7772_422d_8e45_fcf803a7d302.slice/crio-c9799e92b1fc65d251bf7ec1ae920950f0af031da20b6158ec94d1c3c9bd7694 WatchSource:0}: Error finding container c9799e92b1fc65d251bf7ec1ae920950f0af031da20b6158ec94d1c3c9bd7694: Status 404 returned error can't find the container with id c9799e92b1fc65d251bf7ec1ae920950f0af031da20b6158ec94d1c3c9bd7694
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.668131 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pr5dh"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.675997 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" podStartSLOduration=126.675982234 podStartE2EDuration="2m6.675982234s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:29.675051389 +0000 UTC m=+150.960803349" watchObservedRunningTime="2025-12-06 06:27:29.675982234 +0000 UTC m=+150.961734194"
Dec 06 06:27:29 crc kubenswrapper[4823]: W1206 06:27:29.690818 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b2e694_650a_4367_966b_4b53e8313c65.slice/crio-f50de0867dfb4b6ab1a99da4b0b8b8d0fa1a80a97be4a4daa7db6b92db041704 WatchSource:0}: Error finding container f50de0867dfb4b6ab1a99da4b0b8b8d0fa1a80a97be4a4daa7db6b92db041704: Status 404 returned error can't find the container with id f50de0867dfb4b6ab1a99da4b0b8b8d0fa1a80a97be4a4daa7db6b92db041704
Dec 06 06:27:29 crc kubenswrapper[4823]: W1206 06:27:29.706726 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13d85d66_b028_416f_83be_0235701d9b1c.slice/crio-d3c4cf91f52296047b15e751ecca9a21c2147f20098e7c088176f1ed001a688c WatchSource:0}: Error finding container d3c4cf91f52296047b15e751ecca9a21c2147f20098e7c088176f1ed001a688c: Status 404 returned error can't find the container with id d3c4cf91f52296047b15e751ecca9a21c2147f20098e7c088176f1ed001a688c
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.712301 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.719557 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.743592 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-24zhr"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.743824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.744105 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.244092059 +0000 UTC m=+151.529844019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: W1206 06:27:29.799822 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87cf77eb_9dc0_43ee_a48b_78a38378b0f1.slice/crio-f6629f379b216c2e83b0baeb2bf971d9955416cef343fcf8a18b9c8ece947fc5 WatchSource:0}: Error finding container f6629f379b216c2e83b0baeb2bf971d9955416cef343fcf8a18b9c8ece947fc5: Status 404 returned error can't find the container with id f6629f379b216c2e83b0baeb2bf971d9955416cef343fcf8a18b9c8ece947fc5
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.844467 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.845192 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.345178492 +0000 UTC m=+151.630930442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.872067 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cd5rf"]
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.886529 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" podStartSLOduration=126.886506947 podStartE2EDuration="2m6.886506947s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:29.8767023 +0000 UTC m=+151.162454260" watchObservedRunningTime="2025-12-06 06:27:29.886506947 +0000 UTC m=+151.172258897"
Dec 06 06:27:29 crc kubenswrapper[4823]: I1206 06:27:29.957699 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:29 crc kubenswrapper[4823]: E1206 06:27:29.958055 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.458041185 +0000 UTC m=+151.743793145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.062756 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.062852 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.562833698 +0000 UTC m=+151.848585658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.063314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.063610 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.563599619 +0000 UTC m=+151.849351579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.166145 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.166558 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.666538352 +0000 UTC m=+151.952290312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.208904 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-txkg5" event={"ID":"53c6447b-e10a-4b50-86d2-9cd52e4233a4","Type":"ContainerStarted","Data":"847785db360469f4c1bac077055c7b277555db91f5827f4aa86ddf8f201bc11e"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.211281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" event={"ID":"9e474bbe-7252-4213-be37-0916c3c6c1c0","Type":"ContainerStarted","Data":"7e127fd5dc05f844b7099de05de7db8e80654ebac9774b4b4f9c4deec7bbbf35"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.213308 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" event={"ID":"2e10270a-5495-4824-b934-523a90d07dca","Type":"ContainerStarted","Data":"34c72f5ad229c812fbb4007ef77446eb444d7c36212a26cd33bc2ec29142a33a"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.220170 4823 generic.go:334] "Generic (PLEG): container finished" podID="f5f016d4-304f-4e8b-b0d8-9445bd44f6d2" containerID="8e8d6c5f98b9ea8123e7c8e4d86a985a64be9270b617c6898905f4b2bf13523c" exitCode=0
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.220237 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" event={"ID":"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2","Type":"ContainerDied","Data":"8e8d6c5f98b9ea8123e7c8e4d86a985a64be9270b617c6898905f4b2bf13523c"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.222031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" event={"ID":"53a18f23-f29e-43ba-8568-855cb4550b7b","Type":"ContainerStarted","Data":"5151646f3ee75f326c07c901c17bf43d448523955a014bab9bfb68970f26cb25"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.222997 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" podStartSLOduration=127.222984809 podStartE2EDuration="2m7.222984809s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.024576707 +0000 UTC m=+151.310328677" watchObservedRunningTime="2025-12-06 06:27:30.222984809 +0000 UTC m=+151.508736769"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.223645 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-txkg5" podStartSLOduration=6.223639147 podStartE2EDuration="6.223639147s" podCreationTimestamp="2025-12-06 06:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.220738768 +0000 UTC m=+151.506490738" watchObservedRunningTime="2025-12-06 06:27:30.223639147 +0000 UTC m=+151.509391107"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.223720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" event={"ID":"95a0336c-e3d8-4290-a0bb-62d7a7f357ba","Type":"ContainerStarted","Data":"b7d08a7b792aed99b3b7b4bc1d85d1df3f88459353465f30190b1fac28347c29"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.226796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" event={"ID":"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc","Type":"ContainerStarted","Data":"187ae956b0b52867a77c785e356ed337db40ce475790f01c19e9a2175eb1d9fa"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.227806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" event={"ID":"13d85d66-b028-416f-83be-0235701d9b1c","Type":"ContainerStarted","Data":"d3c4cf91f52296047b15e751ecca9a21c2147f20098e7c088176f1ed001a688c"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.228784 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" event={"ID":"1a3167c5-aeb3-4429-aca5-068eb77856f2","Type":"ContainerStarted","Data":"d453d5cc3366d6502a5f1929c1e81d2d6f9922f47c17f6e293b678b3e4a6194e"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.229758 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" event={"ID":"2c06adc8-f875-4eaf-929f-e703320771d1","Type":"ContainerStarted","Data":"bc2a364ce4d5ce5b497958f950dd946128832e7d68697130221aa33893be7722"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.242699 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cphfr" podStartSLOduration=127.242651645 podStartE2EDuration="2m7.242651645s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.235093789 +0000 UTC m=+151.520845749" watchObservedRunningTime="2025-12-06 06:27:30.242651645 +0000 UTC m=+151.528403605"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.252844 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" event={"ID":"66a38981-0dd2-4411-897f-4289cce13349","Type":"ContainerStarted","Data":"84d1912ae3413b75b78f7bd434fd6c09d3722208d4fe73006df47c1500c8edf1"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.264741 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" event={"ID":"4051e58f-39a6-4ec5-a171-c5c700d4576a","Type":"ContainerStarted","Data":"cfd261b8baada4d2a024ca317e9b07b12915d382a47167fcb3801019668c1bf6"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.266780 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" event={"ID":"27d936b5-b671-4d17-b9ee-bff849246c5a","Type":"ContainerStarted","Data":"b5b976ffa1ccd1cc87700fc2f2e6761fc5c0a9d16641b286975eeb585f171c2c"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.267301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.270535 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.770519934 +0000 UTC m=+152.056271974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.271872 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" podStartSLOduration=127.27185612 podStartE2EDuration="2m7.27185612s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.269484876 +0000 UTC m=+151.555236846" watchObservedRunningTime="2025-12-06 06:27:30.27185612 +0000 UTC m=+151.557608080"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.279513 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" event={"ID":"7df56bd6-4c2c-4432-b312-51019bf0f458","Type":"ContainerStarted","Data":"8db05850d27667509280d78b3a51ec336614a37ef57688ade1e15f0630cc81f0"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.285197 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" event={"ID":"872f6308-0267-41cd-bc92-e401f9d7cda9","Type":"ContainerStarted","Data":"5bcb3ebb55614eb0dd6576bcf75d54cffa6cd11fc33bdfc5b2e044f1ca783e01"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.288425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" event={"ID":"9a587710-6059-48a3-a8c4-321a46b85508","Type":"ContainerStarted","Data":"f0e2c203c2644adb0bfa3d5e31dcd79d46e3516d0e183be920520bdf394962cf"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.288475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" event={"ID":"9a587710-6059-48a3-a8c4-321a46b85508","Type":"ContainerStarted","Data":"7766b135f1ed033861401f0fe8c920ff8744a0f91657e16a353920b87e9dcbb6"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.295621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" event={"ID":"2f7a2cf2-a349-4491-88b1-c46345bc28a7","Type":"ContainerStarted","Data":"214c7d40a750ab71f88e76693400a7be3409e4a264ada2220123d0f6239e9734"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.300185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cd5rf" event={"ID":"69358a8e-999e-4fe5-881a-5868db35885c","Type":"ContainerStarted","Data":"dc70c5424b561be9ab8c42a43e865bf6781124f6abc3907c421c635b5cb2591a"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.310205 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" event={"ID":"87cf77eb-9dc0-43ee-a48b-78a38378b0f1","Type":"ContainerStarted","Data":"f6629f379b216c2e83b0baeb2bf971d9955416cef343fcf8a18b9c8ece947fc5"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.320353 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" event={"ID":"3a0d23be-9267-406c-a67e-6970f6e8b922","Type":"ContainerStarted","Data":"c7600f2178578a8571e68ca8d8008edbc44b93c363d1d14d3d3d649a26eb2b9f"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.343442 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" event={"ID":"4351e984-b1eb-4ffc-96b8-5e37536b79da","Type":"ContainerStarted","Data":"fc3c8737a6376c95403e5cb2b5defe4f9349ac7024a5d02cab98a7457916b3ae"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.349186 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j27ls" event={"ID":"15cb22b1-04c0-45b6-81fa-9cb976a1aecb","Type":"ContainerStarted","Data":"89a4d66f811a0ee470750e4695980bb8c4f3db4ef6b188f95e99675f44cbee28"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.350227 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j27ls"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.351638 4823 patch_prober.go:28] interesting pod/console-operator-58897d9998-j27ls container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.351758 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j27ls" podUID="15cb22b1-04c0-45b6-81fa-9cb976a1aecb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.357259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" event={"ID":"54f75ed5-6c32-4667-8209-bbf6ffb81043","Type":"ContainerStarted","Data":"57befc6337e90961b256bdaed901022b9bffee8f47bf701efef4c0fc3120cc38"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.358208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" event={"ID":"dfe3972b-7772-422d-8e45-fcf803a7d302","Type":"ContainerStarted","Data":"c9799e92b1fc65d251bf7ec1ae920950f0af031da20b6158ec94d1c3c9bd7694"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.358947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" event={"ID":"9865b222-071e-470e-8731-aed26f1bce5b","Type":"ContainerStarted","Data":"7c873efb98984b327a03d55131b89dc123f71a0d66969ab9230ae5d053283be4"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.359689 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" event={"ID":"96b2e694-650a-4367-966b-4b53e8313c65","Type":"ContainerStarted","Data":"f50de0867dfb4b6ab1a99da4b0b8b8d0fa1a80a97be4a4daa7db6b92db041704"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.361673 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" event={"ID":"fc6d436c-d1d0-40c8-b30a-86f201725949","Type":"ContainerStarted","Data":"3df5539de2acb438f943732f03d1ad44a4eed56e112f80bdfa2dbf387c0b1bc1"}
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.361701 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.364381 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.364423 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.365798 4823 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rnpnm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.365840 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.369567 4823 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v74ml container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.369634 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" podUID="f7602e8b-a241-4125-aa10-3e9c5a2ca5bd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.370302 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-j27ls" podStartSLOduration=127.370288381 podStartE2EDuration="2m7.370288381s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.36696763 +0000 UTC m=+151.652719590" watchObservedRunningTime="2025-12-06 06:27:30.370288381 +0000 UTC m=+151.656040331"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.371147 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2d7rr" podStartSLOduration=127.371140694 podStartE2EDuration="2m7.371140694s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.284496684 +0000 UTC m=+151.570248644" watchObservedRunningTime="2025-12-06 06:27:30.371140694 +0000 UTC m=+151.656892654"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.377806 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.378918 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.878902845 +0000 UTC m=+152.164654805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.390209 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wzsch" podStartSLOduration=127.390190363 podStartE2EDuration="2m7.390190363s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.383785958 +0000 UTC m=+151.669537918" watchObservedRunningTime="2025-12-06 06:27:30.390190363 +0000 UTC m=+151.675942323"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.451291 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmzfs" podStartSLOduration=127.451269036 podStartE2EDuration="2m7.451269036s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.403473364 +0000 UTC m=+151.689225334" watchObservedRunningTime="2025-12-06 06:27:30.451269036 +0000 UTC m=+151.737021016"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.452988 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4rlt6" podStartSLOduration=127.452978802 podStartE2EDuration="2m7.452978802s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.422944194 +0000 UTC m=+151.708696164" watchObservedRunningTime="2025-12-06 06:27:30.452978802 +0000 UTC m=+151.738730762"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.480134 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rbww" podStartSLOduration=127.480115701 podStartE2EDuration="2m7.480115701s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.47786138 +0000 UTC m=+151.763613330" watchObservedRunningTime="2025-12-06 06:27:30.480115701 +0000 UTC m=+151.765867671"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.480328 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.484312 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:30.984292425 +0000 UTC m=+152.270044465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.508386 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8slc" podStartSLOduration=127.508269658 podStartE2EDuration="2m7.508269658s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.501312548 +0000 UTC m=+151.787064518" watchObservedRunningTime="2025-12-06 06:27:30.508269658 +0000 UTC m=+151.794021628"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.576960 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4rlt6"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.577608 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lq76q" podStartSLOduration=127.577586995 podStartE2EDuration="2m7.577586995s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.536751013 +0000 UTC m=+151.822503053" watchObservedRunningTime="2025-12-06 06:27:30.577586995 +0000 UTC m=+151.863338965"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.577968 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" podStartSLOduration=127.577963046 podStartE2EDuration="2m7.577963046s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:30.576803454 +0000 UTC m=+151.862555434" watchObservedRunningTime="2025-12-06 06:27:30.577963046 +0000 UTC m=+151.863715006"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.581683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.582140 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.082122109 +0000 UTC m=+152.367874069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.586441 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:30 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:30 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:30 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.586501 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.683141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.683479 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.183463528 +0000 UTC m=+152.469215488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.784496 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.784635 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.284611933 +0000 UTC m=+152.570363893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.785074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.785379 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.285367703 +0000 UTC m=+152.571119663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.885767 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.886054 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.386006564 +0000 UTC m=+152.671758534 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:30 crc kubenswrapper[4823]: I1206 06:27:30.988083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:30 crc kubenswrapper[4823]: E1206 06:27:30.989005 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.488990437 +0000 UTC m=+152.774742387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.099211 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.099520 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.599501586 +0000 UTC m=+152.885253546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.202335 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.203409 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.703396875 +0000 UTC m=+152.989148835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.307084 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.307711 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.807621233 +0000 UTC m=+153.093373193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.308093 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.308592 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.8085825 +0000 UTC m=+153.094334460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.426834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.427123 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:31.927098657 +0000 UTC m=+153.212850617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.473572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" event={"ID":"9865b222-071e-470e-8731-aed26f1bce5b","Type":"ContainerStarted","Data":"cf1e03f63fa1c0b549fd01900f8aa2dbc73c4bc483473dcf3fad3d016cf53090"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.488907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" event={"ID":"87cf77eb-9dc0-43ee-a48b-78a38378b0f1","Type":"ContainerStarted","Data":"5ffda543a8f0eb1656325cf45231e45d11d9be3f3866107cffd4336121a5b6a0"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.504303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nbvlv" event={"ID":"7768dedd-2688-4975-ac80-cc98b354e7a5","Type":"ContainerStarted","Data":"47f175f2a9473b86a8ffd920680c8eccccabe4d1c39f0e07554a3a25a6755b4e"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.517534 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cjc4q" podStartSLOduration=128.517515389 podStartE2EDuration="2m8.517515389s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.514839876 +0000 UTC m=+152.800591836" watchObservedRunningTime="2025-12-06 06:27:31.517515389 +0000 UTC m=+152.803267349" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.522066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" event={"ID":"54f75ed5-6c32-4667-8209-bbf6ffb81043","Type":"ContainerStarted","Data":"134b30d11fae0b43f9b2891792e298addcb4ea20d6fe03758abd0857335f9ea8"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.522277 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.526077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" event={"ID":"9e474bbe-7252-4213-be37-0916c3c6c1c0","Type":"ContainerStarted","Data":"caaf876e68cd45a9d4378c099b75dec1ab19b635a45baa5c6148a7dc56cc27d0"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.526308 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.528085 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.528571 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.028551839 +0000 UTC m=+153.314303869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.533693 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a0d23be-9267-406c-a67e-6970f6e8b922" containerID="c7600f2178578a8571e68ca8d8008edbc44b93c363d1d14d3d3d649a26eb2b9f" exitCode=0 Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.533752 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" event={"ID":"3a0d23be-9267-406c-a67e-6970f6e8b922","Type":"ContainerDied","Data":"c7600f2178578a8571e68ca8d8008edbc44b93c363d1d14d3d3d649a26eb2b9f"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.559962 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc" podStartSLOduration=128.559947924 podStartE2EDuration="2m8.559947924s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.559052 +0000 UTC m=+152.844803960" watchObservedRunningTime="2025-12-06 06:27:31.559947924 +0000 UTC m=+152.845699884" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.582154 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:31 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:31 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:31 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.582235 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.592598 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" event={"ID":"4351e984-b1eb-4ffc-96b8-5e37536b79da","Type":"ContainerStarted","Data":"22307c256bacd85d7d5ea5cb64dbb6d9d4eae258247b14f2fdfe57e2d5139ef5"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.604927 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" 
event={"ID":"96b2e694-650a-4367-966b-4b53e8313c65","Type":"ContainerStarted","Data":"e7411dfca0d0fa1b865a48ab535b43744772198345a29a06e032738118ddd1fc"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.605455 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.612171 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9bc9c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.612236 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" podUID="96b2e694-650a-4367-966b-4b53e8313c65" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.629046 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.629161 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.129141739 +0000 UTC m=+153.414893709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.629311 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.630467 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.130459254 +0000 UTC m=+153.416211214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.637185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" event={"ID":"45d8cc05-e612-40f5-a5dd-57d7abbadc51","Type":"ContainerStarted","Data":"61026c27c5eb81d177174f1d29e0ba7f3b43b5d66eccc3b24908147e671e2c75"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.644101 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" podStartSLOduration=128.644040674 podStartE2EDuration="2m8.644040674s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.637080445 +0000 UTC m=+152.922832405" watchObservedRunningTime="2025-12-06 06:27:31.644040674 +0000 UTC m=+152.929792634" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.651198 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" event={"ID":"2f7a2cf2-a349-4491-88b1-c46345bc28a7","Type":"ContainerStarted","Data":"0025402e609984ba2c5ab8673261be6b762222719c38b5ea9e02b9367fed59f5"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.653845 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" event={"ID":"66a38981-0dd2-4411-897f-4289cce13349","Type":"ContainerStarted","Data":"ffb2abc19c629186a47bdf5979b8f6d63e977eeb72b7516d4c2d6db78372ac8a"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.655064 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" event={"ID":"872f6308-0267-41cd-bc92-e401f9d7cda9","Type":"ContainerStarted","Data":"93268231ecbfc1ec8853279c22a3da09547c14e8106eb25957df009f5de62c8a"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.658716 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" event={"ID":"b0483a63-0788-40f1-9d6e-6f3377195729","Type":"ContainerStarted","Data":"f1b47ab715be1022c332e151eaa4110a5fdd23d9f4f33800e4dd719f943e40cd"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.662495 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" podStartSLOduration=128.662482536 podStartE2EDuration="2m8.662482536s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.66078657 +0000 UTC m=+152.946538540" watchObservedRunningTime="2025-12-06 06:27:31.662482536 +0000 UTC m=+152.948234496" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.667065 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" 
event={"ID":"f5f016d4-304f-4e8b-b0d8-9445bd44f6d2","Type":"ContainerStarted","Data":"d6287c5c3608a3d611eefed6b65fb49498ad61c09c564432ea5f9925011e48c8"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.668472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" event={"ID":"13d85d66-b028-416f-83be-0235701d9b1c","Type":"ContainerStarted","Data":"814fb3eec3726a59c5bba3148e33fb289d7532e05ec1bcf0ad326a2eefac0f7b"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.687156 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cd5rf" event={"ID":"69358a8e-999e-4fe5-881a-5868db35885c","Type":"ContainerStarted","Data":"b3a9b638fd9a784c8f556839b493061b7f116a4a4546ca84d89208fc86a72ef9"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.720344 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bf79s" podStartSLOduration=128.720325731 podStartE2EDuration="2m8.720325731s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.719812207 +0000 UTC m=+153.005564177" watchObservedRunningTime="2025-12-06 06:27:31.720325731 +0000 UTC m=+153.006077691" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.722969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" event={"ID":"53a18f23-f29e-43ba-8568-855cb4550b7b","Type":"ContainerStarted","Data":"ca4111359d12dbd3b574d410ddf9859b88173b7564fb391bab9403faceacb154"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.724171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.748915 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2cjj5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.748980 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.750909 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.752490 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.252465737 +0000 UTC m=+153.538217767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.778913 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" event={"ID":"1a3167c5-aeb3-4429-aca5-068eb77856f2","Type":"ContainerStarted","Data":"19e5b6aefddc8b686c33edbc842b31a216cdf262ebd4d46567439d66e58af5ba"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.804428 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" event={"ID":"b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc","Type":"ContainerStarted","Data":"d42c0f58956bb9b3a488270b5d2990e91d26433a2a960621b506a2a7405a1496"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.842049 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" event={"ID":"dfe3972b-7772-422d-8e45-fcf803a7d302","Type":"ContainerStarted","Data":"1e96a08fdf6f0241cd8cc962dbdf6f4692afeaefc92dff8e4c875d09dcad3468"} Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.843796 4823 patch_prober.go:28] interesting pod/console-operator-58897d9998-j27ls container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.843831 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j27ls" podUID="15cb22b1-04c0-45b6-81fa-9cb976a1aecb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.844398 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.852976 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.854605 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.354593737 +0000 UTC m=+153.640345697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.884425 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4hh5t" podStartSLOduration=128.884410609 podStartE2EDuration="2m8.884410609s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.883452003 +0000 UTC m=+153.169203963" watchObservedRunningTime="2025-12-06 06:27:31.884410609 +0000 UTC m=+153.170162559" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.886137 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qtfn6" podStartSLOduration=128.886130616 podStartE2EDuration="2m8.886130616s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.813628092 +0000 UTC m=+153.099380052" watchObservedRunningTime="2025-12-06 06:27:31.886130616 +0000 UTC m=+153.171882576" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.900902 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" podStartSLOduration=128.900890478 podStartE2EDuration="2m8.900890478s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.897448254 +0000 UTC m=+153.183200204" watchObservedRunningTime="2025-12-06 06:27:31.900890478 +0000 UTC m=+153.186642438" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.914082 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjkks" podStartSLOduration=128.914064167 podStartE2EDuration="2m8.914064167s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.911463866 +0000 UTC m=+153.197215846" watchObservedRunningTime="2025-12-06 06:27:31.914064167 +0000 UTC m=+153.199816137" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.925041 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nsc26" podStartSLOduration=128.925021645 podStartE2EDuration="2m8.925021645s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.924215213 +0000 UTC m=+153.209967173" watchObservedRunningTime="2025-12-06 06:27:31.925021645 +0000 UTC m=+153.210773605" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.948743 4823 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" podStartSLOduration=128.948729361 podStartE2EDuration="2m8.948729361s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.945960525 +0000 UTC m=+153.231712485" watchObservedRunningTime="2025-12-06 06:27:31.948729361 +0000 UTC m=+153.234481321" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.954185 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:31 crc kubenswrapper[4823]: E1206 06:27:31.955808 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.455790813 +0000 UTC m=+153.741542783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.973212 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" podStartSLOduration=128.973191167 podStartE2EDuration="2m8.973191167s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.969841266 +0000 UTC m=+153.255593236" watchObservedRunningTime="2025-12-06 06:27:31.973191167 +0000 UTC m=+153.258943127" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.983008 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bqbf4" podStartSLOduration=128.982992964 podStartE2EDuration="2m8.982992964s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.982613623 +0000 UTC m=+153.268365583" watchObservedRunningTime="2025-12-06 06:27:31.982992964 +0000 UTC m=+153.268744924" Dec 06 06:27:31 crc kubenswrapper[4823]: I1206 06:27:31.997879 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnpns" podStartSLOduration=128.997859839 podStartE2EDuration="2m8.997859839s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:31.994171128 +0000 UTC m=+153.279923118" watchObservedRunningTime="2025-12-06 06:27:31.997859839 +0000 
UTC m=+153.283611789" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.012194 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cd5rf" podStartSLOduration=8.012177698 podStartE2EDuration="8.012177698s" podCreationTimestamp="2025-12-06 06:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:32.009716421 +0000 UTC m=+153.295468381" watchObservedRunningTime="2025-12-06 06:27:32.012177698 +0000 UTC m=+153.297929658" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.024264 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-77z7z" podStartSLOduration=129.024247607 podStartE2EDuration="2m9.024247607s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:32.023889857 +0000 UTC m=+153.309641817" watchObservedRunningTime="2025-12-06 06:27:32.024247607 +0000 UTC m=+153.309999567" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.040734 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h8lzb" podStartSLOduration=129.040718646 podStartE2EDuration="2m9.040718646s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:32.038208707 +0000 UTC m=+153.323960667" watchObservedRunningTime="2025-12-06 06:27:32.040718646 +0000 UTC m=+153.326470606" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.055964 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.056279 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.556263109 +0000 UTC m=+153.842015069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.061626 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.061833 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.062851 4823 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cwrr7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.062901 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" podUID="f5f016d4-304f-4e8b-b0d8-9445bd44f6d2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.176698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.177136 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.677119 +0000 UTC m=+153.962870960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.218204 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v74ml" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.278888 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.279217 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.77920634 +0000 UTC m=+154.064958300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.381716 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.382088 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.882070871 +0000 UTC m=+154.167822831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.482974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.483576 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:32.983564735 +0000 UTC m=+154.269316695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.526827 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mghr2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.526896 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" podUID="9e474bbe-7252-4213-be37-0916c3c6c1c0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.618174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.618480 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.118466498 +0000 UTC m=+154.404218458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.620095 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:32 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:32 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:32 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.620126 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.723590 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.723942 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.2239291 +0000 UTC m=+154.509681060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.846277 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.846440 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.346416195 +0000 UTC m=+154.632168155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.846749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.847089 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.347080193 +0000 UTC m=+154.632832143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.870947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" event={"ID":"66a38981-0dd2-4411-897f-4289cce13349","Type":"ContainerStarted","Data":"5a598dbb3c8d46008dca12bdd9fe87ceb0d2fe7d1273746a11dd68d44cfd7adf"}
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.914553 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nbvlv" event={"ID":"7768dedd-2688-4975-ac80-cc98b354e7a5","Type":"ContainerStarted","Data":"869700dc617a944e765d5c654b227fddc97cf8c165292b769ad58c90a9be132c"}
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.918944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" event={"ID":"27d936b5-b671-4d17-b9ee-bff849246c5a","Type":"ContainerStarted","Data":"c51992fa8b4f0bf8d9d8293933625666704ab25fe95a487175cfe694dca533bd"}
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.920647 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" event={"ID":"dfe3972b-7772-422d-8e45-fcf803a7d302","Type":"ContainerStarted","Data":"7a3f1ddcdb1b52fd392c95236b47e6b1384cf583880f886dcfcb28c8949ac5bd"}
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.923479 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" event={"ID":"87cf77eb-9dc0-43ee-a48b-78a38378b0f1","Type":"ContainerStarted","Data":"e49c055fe17abf5e35a8dc6734923042376d5e2f58daa938de7230b4bf70bb02"}
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.924615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" event={"ID":"45d8cc05-e612-40f5-a5dd-57d7abbadc51","Type":"ContainerStarted","Data":"28c308d5e7baf4bdf53af7cc8d93d4075996ce106f79296a0837294fb3ed85bd"}
Dec 06 06:27:32 crc kubenswrapper[4823]: I1206 06:27:32.950464 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:32 crc kubenswrapper[4823]: E1206 06:27:32.951025 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.451005153 +0000 UTC m=+154.736757113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.064772 4823 patch_prober.go:28] interesting pod/console-operator-58897d9998-j27ls container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.064831 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j27ls" podUID="15cb22b1-04c0-45b6-81fa-9cb976a1aecb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.065363 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2cjj5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.065421 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.065450 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9bc9c container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.065480 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" podUID="96b2e694-650a-4367-966b-4b53e8313c65" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.066453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.068141 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.568118702 +0000 UTC m=+154.853870662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.179787 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.180203 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.680173304 +0000 UTC m=+154.965925264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.180858 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.182708 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.682693892 +0000 UTC m=+154.968446032 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.182792 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4drs2" podStartSLOduration=130.182775624 podStartE2EDuration="2m10.182775624s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:33.118204036 +0000 UTC m=+154.403955996" watchObservedRunningTime="2025-12-06 06:27:33.182775624 +0000 UTC m=+154.468527584"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.277525 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bmbbn" podStartSLOduration=130.277501974 podStartE2EDuration="2m10.277501974s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:33.188373627 +0000 UTC m=+154.474125597" watchObservedRunningTime="2025-12-06 06:27:33.277501974 +0000 UTC m=+154.563253934"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.278381 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rq4rk" podStartSLOduration=130.278375428 podStartE2EDuration="2m10.278375428s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:33.264896721 +0000 UTC m=+154.550648681" watchObservedRunningTime="2025-12-06 06:27:33.278375428 +0000 UTC m=+154.564127388"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.288734 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.288923 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.788890634 +0000 UTC m=+155.074642594 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.289114 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.289457 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.789448419 +0000 UTC m=+155.075200379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.390505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.390722 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.890694056 +0000 UTC m=+155.176446026 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.391205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.391520 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.891509908 +0000 UTC m=+155.177261868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.492003 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.492125 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.992107038 +0000 UTC m=+155.277858998 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.492143 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.492408 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:33.992399636 +0000 UTC m=+155.278151596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.593246 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.593688 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.093650613 +0000 UTC m=+155.379402583 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.593737 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:33 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:33 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:33 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.593774 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.677308 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxh4w"]
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.678499 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.685297 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.696410 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.696724 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.196713569 +0000 UTC m=+155.482465529 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.711762 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxh4w"]
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.797109 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.797331 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.297285487 +0000 UTC m=+155.583037447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.797705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-catalog-content\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.797743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-utilities\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.798798 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.798859 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzcsx\" (UniqueName: \"kubernetes.io/projected/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-kube-api-access-nzcsx\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.799270 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.299262941 +0000 UTC m=+155.585014901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.880606 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2sgg7"]
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.882281 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.885391 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.899556 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.899646 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.399626954 +0000 UTC m=+155.685378914 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.899710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.899754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-utilities\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.899788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzcsx\" (UniqueName: \"kubernetes.io/projected/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-kube-api-access-nzcsx\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.899865 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxqs\" (UniqueName: \"kubernetes.io/projected/130f260b-b329-499b-a6ff-b15b96d8bf7d-kube-api-access-8vxqs\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.899954 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-catalog-content\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.900015 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-utilities\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.900043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-catalog-content\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.901167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-utilities\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.901336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-catalog-content\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.901384 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sgg7"]
Dec 06 06:27:33 crc kubenswrapper[4823]: E1206 06:27:33.901835 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.401811414 +0000 UTC m=+155.687563584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.936873 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzcsx\" (UniqueName: \"kubernetes.io/projected/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-kube-api-access-nzcsx\") pod \"certified-operators-gxh4w\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.972345 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mghr2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.972398 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" podUID="9e474bbe-7252-4213-be37-0916c3c6c1c0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.976359 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" event={"ID":"3a0d23be-9267-406c-a67e-6970f6e8b922","Type":"ContainerStarted","Data":"37d9a02fa70997be4d5df80dd3b5eb7eb11e4510bcabfdaf8d3fd43eacc88ea9"}
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.979855 4823 generic.go:334] "Generic (PLEG): container finished" podID="95a0336c-e3d8-4290-a0bb-62d7a7f357ba" containerID="b7d08a7b792aed99b3b7b4bc1d85d1df3f88459353465f30190b1fac28347c29" exitCode=0
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.980100 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" event={"ID":"95a0336c-e3d8-4290-a0bb-62d7a7f357ba","Type":"ContainerDied","Data":"b7d08a7b792aed99b3b7b4bc1d85d1df3f88459353465f30190b1fac28347c29"}
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.980680 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2cjj5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Dec 06 06:27:33 crc kubenswrapper[4823]: I1206 06:27:33.980727 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.001222 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.001478 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-catalog-content\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.001570 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-utilities\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.001974 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.50193813 +0000 UTC m=+155.787690090 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.002563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-catalog-content\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.002570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-utilities\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.002860 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxh4w"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.002936 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxqs\" (UniqueName: \"kubernetes.io/projected/130f260b-b329-499b-a6ff-b15b96d8bf7d-kube-api-access-8vxqs\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.030374 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vmw25" podStartSLOduration=131.030357744 podStartE2EDuration="2m11.030357744s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:34.030188159 +0000 UTC m=+155.315940119" watchObservedRunningTime="2025-12-06 06:27:34.030357744 +0000 UTC m=+155.316109704"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.035105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vxqs\" (UniqueName: \"kubernetes.io/projected/130f260b-b329-499b-a6ff-b15b96d8bf7d-kube-api-access-8vxqs\") pod \"community-operators-2sgg7\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.041138 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nbvlv"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.083598 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fqbpf"]
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.085151 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.104074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.104129 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-catalog-content\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.104165 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-utilities\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.104196 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkcx\" (UniqueName: \"kubernetes.io/projected/924b1003-afd5-49e2-883d-12b314c93876-kube-api-access-mmkcx\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.104550 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.604534654 +0000 UTC m=+155.890286694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.157750 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqbpf"]
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.201462 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pr5dh" podStartSLOduration=131.201444583 podStartE2EDuration="2m11.201444583s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:34.195643835 +0000 UTC m=+155.481395805" watchObservedRunningTime="2025-12-06 06:27:34.201444583 +0000 UTC m=+155.487196543"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.226638 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sgg7"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.226911 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.227225 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-catalog-content\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.227302 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.727278026 +0000 UTC m=+156.013030026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.227363 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-utilities\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.227430 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkcx\" (UniqueName: \"kubernetes.io/projected/924b1003-afd5-49e2-883d-12b314c93876-kube-api-access-mmkcx\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.227734 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-catalog-content\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.231948 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-utilities\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.329352 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.329689 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:34.829656864 +0000 UTC m=+156.115408824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.430397 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nbvlv" podStartSLOduration=10.430378957 podStartE2EDuration="10.430378957s" podCreationTimestamp="2025-12-06 06:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:34.428900417 +0000 UTC m=+155.714652377" watchObservedRunningTime="2025-12-06 06:27:34.430378957 +0000 UTC m=+155.716130917"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.512215 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.512683 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.012649427 +0000 UTC m=+156.298401387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.518965 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xx2np"]
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.520216 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.537098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkcx\" (UniqueName: \"kubernetes.io/projected/924b1003-afd5-49e2-883d-12b314c93876-kube-api-access-mmkcx\") pod \"certified-operators-fqbpf\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.575236 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xx2np"]
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.591339 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:34 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:34 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:34 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.591395 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.614539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.614587 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrhwf\" (UniqueName: \"kubernetes.io/projected/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-kube-api-access-lrhwf\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.614609 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-utilities\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.614705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-catalog-content\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.615004 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.114990313 +0000 UTC m=+156.400742273 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.706156 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbpf"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.721034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.721287 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-catalog-content\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.721354 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrhwf\" (UniqueName: \"kubernetes.io/projected/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-kube-api-access-lrhwf\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.721380 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-utilities\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.722469 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-utilities\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.722474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-catalog-content\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.722546 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.222528491 +0000 UTC m=+156.508280451 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.824888 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.825599 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.325586367 +0000 UTC m=+156.611338327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.882445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrhwf\" (UniqueName: \"kubernetes.io/projected/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-kube-api-access-lrhwf\") pod \"community-operators-xx2np\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.932143 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx2np"
Dec 06 06:27:34 crc kubenswrapper[4823]: I1206 06:27:34.932199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 06:27:34 crc kubenswrapper[4823]: E1206 06:27:34.932699 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.432677623 +0000 UTC m=+156.718429583 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.035490 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.035904 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.535889214 +0000 UTC m=+156.821641174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.073195 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" event={"ID":"3a0d23be-9267-406c-a67e-6970f6e8b922","Type":"ContainerStarted","Data":"d03dfcc9d4628a992312690ece12f08bc053131067110c492ad3f45b07b61c7d"} Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.138939 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.139908 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.639892676 +0000 UTC m=+156.925644636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.241492 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.241915 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.741899294 +0000 UTC m=+157.027651254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.345315 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.345523 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.845497635 +0000 UTC m=+157.131249605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.345622 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.345929 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.845921066 +0000 UTC m=+157.131673036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.451631 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.451936 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:35.951915643 +0000 UTC m=+157.237667603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.575573 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.576042 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.076025862 +0000 UTC m=+157.361777822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.617613 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:35 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:35 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:35 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.618125 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.692646 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.693061 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.193043839 +0000 UTC m=+157.478795799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.793967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.794405 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.294389408 +0000 UTC m=+157.580141368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:35 crc kubenswrapper[4823]: I1206 06:27:35.894784 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:35 crc kubenswrapper[4823]: E1206 06:27:35.895247 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.395228354 +0000 UTC m=+157.680980314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.003620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.004062 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.504045727 +0000 UTC m=+157.789797687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.106467 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.106885 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.606866547 +0000 UTC m=+157.892618507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.107114 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.107173 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.200718 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" event={"ID":"27d936b5-b671-4d17-b9ee-bff849246c5a","Type":"ContainerStarted","Data":"b56b7ae9524c37a0c191a8b7accf0e6959acc505d1509fafb2e26ac9ad0f48ce"} Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.208418 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.210971 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.710952902 +0000 UTC m=+157.996704872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.341860 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.342449 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 06:27:36.842431732 +0000 UTC m=+158.128183692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.473539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.473930 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:36.973917372 +0000 UTC m=+158.259669332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.607260 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.607587 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.107570792 +0000 UTC m=+158.393322752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.619916 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:36 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:36 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:36 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.619972 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.700805 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.700860 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.703042 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.703066 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.709795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.710138 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.210126234 +0000 UTC m=+158.495878184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.790859 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j27ls" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.798483 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-96764 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.798624 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" podUID="7df56bd6-4c2c-4432-b312-51019bf0f458" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.800119 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-96764 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.800179 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" podUID="7df56bd6-4c2c-4432-b312-51019bf0f458" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.812012 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.814445 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.314422014 +0000 UTC m=+158.600173984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.831060 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" podStartSLOduration=133.831041057 podStartE2EDuration="2m13.831041057s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:36.041786425 +0000 UTC m=+157.327538395" watchObservedRunningTime="2025-12-06 06:27:36.831041057 +0000 UTC m=+158.116793017" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.847330 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrnhm"] Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.850822 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.866261 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.867480 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.876818 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.887074 4823 patch_prober.go:28] interesting pod/console-f9d7485db-wzsch container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.887150 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wzsch" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.902372 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxh4w"] Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.902920 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.920983 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:36 crc kubenswrapper[4823]: E1206 06:27:36.921337 4823 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.421323285 +0000 UTC m=+158.707075245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.986835 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-px8wk"] Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.987880 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.988274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrnhm"] Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.988353 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.988820 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.996282 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 06 06:27:36 crc kubenswrapper[4823]: I1206 06:27:36.996514 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.022280 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.022459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2786k\" (UniqueName: \"kubernetes.io/projected/ab79175b-ce4b-4ad8-863b-31fe71624804-kube-api-access-2786k\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.022484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-catalog-content\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.022522 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-utilities\") pod 
\"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.023366 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.523347913 +0000 UTC m=+158.809099873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.087515 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-px8wk"] Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.114924 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141417 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2786k\" (UniqueName: \"kubernetes.io/projected/ab79175b-ce4b-4ad8-863b-31fe71624804-kube-api-access-2786k\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-catalog-content\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141490 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-utilities\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-catalog-content\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141579 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-utilities\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141627 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crgs\" (UniqueName: \"kubernetes.io/projected/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-kube-api-access-2crgs\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.141702 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.141999 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.641986464 +0000 UTC m=+158.927738424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.142684 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-catalog-content\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.142958 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-utilities\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.294403 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.294615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-catalog-content\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.294639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.294684 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-utilities\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.294700 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.294718 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crgs\" (UniqueName: \"kubernetes.io/projected/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-kube-api-access-2crgs\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " 
pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.295014 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.794997751 +0000 UTC m=+159.080749711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.295333 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-catalog-content\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.295372 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.299277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-utilities\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.319912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2786k\" (UniqueName: \"kubernetes.io/projected/ab79175b-ce4b-4ad8-863b-31fe71624804-kube-api-access-2786k\") pod \"redhat-marketplace-jrnhm\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.320595 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vjc84"] Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.321648 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.321761 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.336030 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerStarted","Data":"79b45fb3821a7a1a539e57cdbd91b65b24541938b5b426ee59a5e721ef8a9f4c"} Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.344160 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cwrr7" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.347079 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vjc84"] Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.354859 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.362656 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crgs\" (UniqueName: \"kubernetes.io/projected/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-kube-api-access-2crgs\") pod \"redhat-marketplace-px8wk\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.371529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.394071 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.398583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.399019 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:37.899004233 +0000 UTC m=+159.184756193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.505765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.506055 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6tx\" (UniqueName: \"kubernetes.io/projected/dfb88fb7-5645-4804-a359-800d2b14fabe-kube-api-access-5m6tx\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.506112 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-catalog-content\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.506154 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-utilities\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.506814 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.006800038 +0000 UTC m=+159.292551998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.526997 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.575375 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jgghf"] Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.576473 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.578868 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4rlt6" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.622380 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:37 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:37 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:37 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.622657 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.623851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6tx\" (UniqueName: \"kubernetes.io/projected/dfb88fb7-5645-4804-a359-800d2b14fabe-kube-api-access-5m6tx\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.623901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.623930 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-catalog-content\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.623973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-utilities\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.624814 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.124653997 +0000 UTC m=+159.410405957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.625325 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-catalog-content\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.640954 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.705948 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgghf"] Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.707395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-utilities\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.726268 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.726569 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-catalog-content\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.726715 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-utilities\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.726733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb99t\" (UniqueName: \"kubernetes.io/projected/125800d8-7679-4574-8992-181928f47efc-kube-api-access-qb99t\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.727655 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 06:27:38.227638862 +0000 UTC m=+159.513390822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.795134 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.796186 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.799750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6tx\" (UniqueName: \"kubernetes.io/projected/dfb88fb7-5645-4804-a359-800d2b14fabe-kube-api-access-5m6tx\") pod \"redhat-operators-vjc84\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.828379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-catalog-content\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.828518 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.828542 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-utilities\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.828575 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb99t\" (UniqueName: \"kubernetes.io/projected/125800d8-7679-4574-8992-181928f47efc-kube-api-access-qb99t\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.863283 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.363259324 +0000 UTC m=+159.649011284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.872147 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-catalog-content\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.881962 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-utilities\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.927205 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mghr2" Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.929424 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:37 crc kubenswrapper[4823]: E1206 06:27:37.930839 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.430816174 +0000 UTC m=+159.716568154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:37 crc kubenswrapper[4823]: I1206 06:27:37.950445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb99t\" (UniqueName: \"kubernetes.io/projected/125800d8-7679-4574-8992-181928f47efc-kube-api-access-qb99t\") pod \"redhat-operators-jgghf\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.000968 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9bc9c" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.001031 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.001530 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.023952 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.034544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.035257 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.535240018 +0000 UTC m=+159.820991978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.092542 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.092855 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a0336c-e3d8-4290-a0bb-62d7a7f357ba" containerName="collect-profiles" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.092870 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a0336c-e3d8-4290-a0bb-62d7a7f357ba" containerName="collect-profiles" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.092992 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a0336c-e3d8-4290-a0bb-62d7a7f357ba" containerName="collect-profiles" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.093429 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.103082 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.116080 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.138766 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.138822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw6vq\" (UniqueName: \"kubernetes.io/projected/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-kube-api-access-vw6vq\") pod \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.138869 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-secret-volume\") pod \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.138943 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-config-volume\") pod \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\" (UID: \"95a0336c-e3d8-4290-a0bb-62d7a7f357ba\") " Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.139982 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 06:27:38.639965508 +0000 UTC m=+159.925717468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.164123 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "95a0336c-e3d8-4290-a0bb-62d7a7f357ba" (UID: "95a0336c-e3d8-4290-a0bb-62d7a7f357ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.164654 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.165878 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-kube-api-access-vw6vq" (OuterVolumeSpecName: "kube-api-access-vw6vq") pod "95a0336c-e3d8-4290-a0bb-62d7a7f357ba" (UID: "95a0336c-e3d8-4290-a0bb-62d7a7f357ba"). InnerVolumeSpecName "kube-api-access-vw6vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.171876 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sgg7"] Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.202685 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.210954 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95a0336c-e3d8-4290-a0bb-62d7a7f357ba" (UID: "95a0336c-e3d8-4290-a0bb-62d7a7f357ba"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.241482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.241539 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1b835c-94db-42b1-b073-2cde9a83979b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.241565 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1b835c-94db-42b1-b073-2cde9a83979b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.241597 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.241607 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw6vq\" (UniqueName: \"kubernetes.io/projected/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-kube-api-access-vw6vq\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.241619 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a0336c-e3d8-4290-a0bb-62d7a7f357ba-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.241907 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.741893874 +0000 UTC m=+160.027645834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.342199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.342456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1b835c-94db-42b1-b073-2cde9a83979b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.342489 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1b835c-94db-42b1-b073-2cde9a83979b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.342941 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:38.842924875 +0000 UTC m=+160.128676845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.342986 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1b835c-94db-42b1-b073-2cde9a83979b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.487456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.487962 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 06:27:38.987947284 +0000 UTC m=+160.273699244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.542170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" event={"ID":"27d936b5-b671-4d17-b9ee-bff849246c5a","Type":"ContainerStarted","Data":"5cf53e3121d16ff0fa1a4feaaffd7e25defa305437010b7abdb564cc9d9ce0a8"} Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.546047 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" event={"ID":"95a0336c-e3d8-4290-a0bb-62d7a7f357ba","Type":"ContainerDied","Data":"74516f7e8a6d3477f811c9a91127987b53b392432fdca1c2a235d7e745f9b2da"} Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.546088 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74516f7e8a6d3477f811c9a91127987b53b392432fdca1c2a235d7e745f9b2da" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.546145 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.589619 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1b835c-94db-42b1-b073-2cde9a83979b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.589857 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.590245 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.090230699 +0000 UTC m=+160.375982659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.621353 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xx2np"] Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.621994 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerStarted","Data":"83f9c51ddd79895c7f90b0b24a4a64d715187500e987e7eaf8fba0d48f9acc18"} Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.692702 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.693323 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.193309166 +0000 UTC m=+160.479061126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.694903 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:38 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:38 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:38 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.694964 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.788316 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-96764" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.794020 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.794989 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.294973125 +0000 UTC m=+160.580725085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.813616 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.872704 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqbpf"] Dec 06 06:27:38 crc kubenswrapper[4823]: I1206 06:27:38.900033 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:38 crc kubenswrapper[4823]: E1206 06:27:38.901188 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.401175727 +0000 UTC m=+160.686927687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.004308 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.004906 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.504886071 +0000 UTC m=+160.790638031 (durationBeforeRetry 500ms). 
Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.105528 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.105860 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.60584716 +0000 UTC m=+160.891599120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.210411 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.211159 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.711140207 +0000 UTC m=+160.996892167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.316057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.319185 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.819160799 +0000 UTC m=+161.104912759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.380163 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-px8wk"] Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.417915 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.418373 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:39.91835612 +0000 UTC m=+161.204108080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.519359 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.519771 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.019756841 +0000 UTC m=+161.305508801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.592905 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:39 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:39 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:39 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.593007 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.620463 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.620766 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.120741211 +0000 UTC m=+161.406493171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.671863 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrnhm"] Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.698398 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerStarted","Data":"22083de0bbc67447e54910ff361ef6043ffb2d2100349ed45158633dc4ece786"} Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.703339 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.712997 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerStarted","Data":"a2218be1f3b868e78c5fee81e6a97c73bffc9d0861a1180bf17f4e49c9c15008"} Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.723158 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.723532 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.22351715 +0000 UTC m=+161.509269110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.738383 4823 generic.go:334] "Generic (PLEG): container finished" podID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerID="af21c8e9a3740a3bc0bb6e6a2068b3db5d70bd3b4fcab1ca606660efd88c86a0" exitCode=0 Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.738440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerDied","Data":"af21c8e9a3740a3bc0bb6e6a2068b3db5d70bd3b4fcab1ca606660efd88c86a0"} Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.742315 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.767994 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerStarted","Data":"d1b2bbee2dc63d56575b10d65db4fddf8a5a515620b638c93afbf555de310075"} Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.791818 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vjc84"] Dec 06 06:27:39 crc kubenswrapper[4823]: W1206 06:27:39.812859 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfb88fb7_5645_4804_a359_800d2b14fabe.slice/crio-71bf6c9afbf478b0aa5ffda6fb16e0fa258be059656478907cf4692e1e5b2a1d WatchSource:0}: Error finding container 71bf6c9afbf478b0aa5ffda6fb16e0fa258be059656478907cf4692e1e5b2a1d: Status 404 returned error can't find the container with id 71bf6c9afbf478b0aa5ffda6fb16e0fa258be059656478907cf4692e1e5b2a1d Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.826207 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.827740 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.327719997 +0000 UTC m=+161.613471957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.848531 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgghf"] Dec 06 06:27:39 crc kubenswrapper[4823]: I1206 06:27:39.932458 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:39 crc kubenswrapper[4823]: E1206 06:27:39.932916 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.432898151 +0000 UTC m=+161.718650101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.017618 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.034410 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:40 crc kubenswrapper[4823]: E1206 06:27:40.034983 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.53496046 +0000 UTC m=+161.820712420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.056502 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nbvlv" Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.059252 4823 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.139819 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:40 crc kubenswrapper[4823]: E1206 06:27:40.140829 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.640807143 +0000 UTC m=+161.926559113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.240907 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:40 crc kubenswrapper[4823]: E1206 06:27:40.241429 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.741399732 +0000 UTC m=+162.027151702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.342538 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:40 crc kubenswrapper[4823]: E1206 06:27:40.342947 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 06:27:40.842931777 +0000 UTC m=+162.128683737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rb79w" (UID: "3f369975-6444-44d7-b85f-290ec604b172") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 06:27:40 crc kubenswrapper[4823]: E1206 06:27:40.353576 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a8777c_f9f5_4a33_9dc8_93b93edc6fa1.slice/crio-2dc0d68e6214e5b282fa5cde765c9b3b8cf53b447e0c185cca0995e57ecf49dd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab79175b_ce4b_4ad8_863b_31fe71624804.slice/crio-05d839d7047ae75b274457e02d22cbe1c6622642f640e5d604c204a86d86401a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab79175b_ce4b_4ad8_863b_31fe71624804.slice/crio-conmon-05d839d7047ae75b274457e02d22cbe1c6622642f640e5d604c204a86d86401a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6edd27de_5a66_4fbb_ac77_6889ff93d1b4.slice/crio-conmon-1ff12bfcc8a37fffdb53d9bb32af8f9d7d285514786232d6d04a6e9679ed2b44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod130f260b_b329_499b_a6ff_b15b96d8bf7d.slice/crio-3d88a3196f43ac4857a976954211b27ebbab989f5b853f461e90a0e79a9e2f90.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6edd27de_5a66_4fbb_ac77_6889ff93d1b4.slice/crio-1ff12bfcc8a37fffdb53d9bb32af8f9d7d285514786232d6d04a6e9679ed2b44.scope\": RecentStats: unable to find data in memory cache]" Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.378751 4823 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-06T06:27:40.059271872Z","Handler":null,"Name":""} Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.386322 4823 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.386374 4823 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.443689 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.447975 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.547145 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.563271 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.563330 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.581011 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:40 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:40 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:40 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.581073 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.646419 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rb79w\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.730494 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.816924 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerID="05d839d7047ae75b274457e02d22cbe1c6622642f640e5d604c204a86d86401a" exitCode=0
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.816998 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerDied","Data":"05d839d7047ae75b274457e02d22cbe1c6622642f640e5d604c204a86d86401a"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.817029 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerStarted","Data":"2662b60d3f057dcb3a871a2f0db47aba9f9c78401d34cfd3e80cf12af825bb00"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.820916 4823 generic.go:334] "Generic (PLEG): container finished" podID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerID="3d88a3196f43ac4857a976954211b27ebbab989f5b853f461e90a0e79a9e2f90" exitCode=0
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.820983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerDied","Data":"3d88a3196f43ac4857a976954211b27ebbab989f5b853f461e90a0e79a9e2f90"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.825877 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c773a617-cbcd-4cc3-8f5c-1390032ef5da","Type":"ContainerStarted","Data":"a01e8e4bcbc924c2bf41d1e7ffcdd5c787be58b7ba587491ffe749d0ffbd6b8d"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.833285 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerStarted","Data":"71bf6c9afbf478b0aa5ffda6fb16e0fa258be059656478907cf4692e1e5b2a1d"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.835340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerStarted","Data":"a4fe80ac9b650217f9f02ca316561cecf39d9e72c2f09f770d47d4c6eaf91f2d"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.837893 4823 generic.go:334] "Generic (PLEG): container finished" podID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerID="1ff12bfcc8a37fffdb53d9bb32af8f9d7d285514786232d6d04a6e9679ed2b44" exitCode=0
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.837958 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerDied","Data":"1ff12bfcc8a37fffdb53d9bb32af8f9d7d285514786232d6d04a6e9679ed2b44"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.860445 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"de1b835c-94db-42b1-b073-2cde9a83979b","Type":"ContainerStarted","Data":"7b45655a864f234cac4e36665119390defabf5278bf3ebf6619432f308b98d5a"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.883435 4823 generic.go:334] "Generic (PLEG): container finished" podID="924b1003-afd5-49e2-883d-12b314c93876" containerID="4a0b529b84dbff6f27c4ac57b2243760759c470d9cec14e54af716a1a9a52c22" exitCode=0
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.883571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerDied","Data":"4a0b529b84dbff6f27c4ac57b2243760759c470d9cec14e54af716a1a9a52c22"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.888999 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" event={"ID":"27d936b5-b671-4d17-b9ee-bff849246c5a","Type":"ContainerStarted","Data":"6c1731e65883a668d811c3f973afd240d6aa6c01052bef64e91e16e84caa24e4"}
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.890955 4823 generic.go:334] "Generic (PLEG): container finished" podID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerID="2dc0d68e6214e5b282fa5cde765c9b3b8cf53b447e0c185cca0995e57ecf49dd" exitCode=0
Dec 06 06:27:40 crc kubenswrapper[4823]: I1206 06:27:40.890985 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerDied","Data":"2dc0d68e6214e5b282fa5cde765c9b3b8cf53b447e0c185cca0995e57ecf49dd"}
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.162365 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.296833 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-24zhr" podStartSLOduration=17.296808311 podStartE2EDuration="17.296808311s" podCreationTimestamp="2025-12-06 06:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:40.958154579 +0000 UTC m=+162.243906559" watchObservedRunningTime="2025-12-06 06:27:41.296808311 +0000 UTC m=+162.582560271"
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.299435 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rb79w"]
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.514910 4823 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fwj5n container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]log ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]etcd ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/max-in-flight-filter ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 06 06:27:41 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 06 06:27:41 crc kubenswrapper[4823]: livez check failed
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.515249 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" podUID="3a0d23be-9267-406c-a67e-6970f6e8b922" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.584140 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:41 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:41 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:41 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.584202 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.979492 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerID="02c706c6536e936b43edea8497dbd5e4b71834259696710a1df91b974a583cd9" exitCode=0
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.979548 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerDied","Data":"02c706c6536e936b43edea8497dbd5e4b71834259696710a1df91b974a583cd9"}
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.983033 4823 generic.go:334] "Generic (PLEG): container finished" podID="125800d8-7679-4574-8992-181928f47efc" containerID="f512251dddca7bd986cfe6e2ce15a71fb43cb729e7382def0360b71ba37316e1" exitCode=0
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.983096 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerDied","Data":"f512251dddca7bd986cfe6e2ce15a71fb43cb729e7382def0360b71ba37316e1"}
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.985596 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" event={"ID":"3f369975-6444-44d7-b85f-290ec604b172","Type":"ContainerStarted","Data":"7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928"}
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.985621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" event={"ID":"3f369975-6444-44d7-b85f-290ec604b172","Type":"ContainerStarted","Data":"78d8ade9f0610a386aafb1d9e8f8c70266a3e0c37a1f96824c29e2e367e040e8"}
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.985987 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.991174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"de1b835c-94db-42b1-b073-2cde9a83979b","Type":"ContainerStarted","Data":"d47928bd88cc76c6158c97678e9f8017bfc12b18a2deb83108b3ff0a1f10c9e4"}
Dec 06 06:27:41 crc kubenswrapper[4823]: I1206 06:27:41.993900 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c773a617-cbcd-4cc3-8f5c-1390032ef5da","Type":"ContainerStarted","Data":"d1da4a4010cea72c4aeaedfbbb7f9abbb221527d965abc41903cd0fd1d4204af"}
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.112832 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.112811271 podStartE2EDuration="4.112811271s" podCreationTimestamp="2025-12-06 06:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:42.111403642 +0000 UTC m=+163.397155602" watchObservedRunningTime="2025-12-06 06:27:42.112811271 +0000 UTC m=+163.398563231"
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.137265 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" podStartSLOduration=139.137245796 podStartE2EDuration="2m19.137245796s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:42.135063786 +0000 UTC m=+163.420815756" watchObservedRunningTime="2025-12-06 06:27:42.137245796 +0000 UTC m=+163.422997756"
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.150591 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=6.150573599 podStartE2EDuration="6.150573599s" podCreationTimestamp="2025-12-06 06:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:42.149847049 +0000 UTC m=+163.435599019" watchObservedRunningTime="2025-12-06 06:27:42.150573599 +0000 UTC m=+163.436325559"
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.579152 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:42 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:42 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.579439 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.830695 4823 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fwj5n container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]log ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]etcd ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/max-in-flight-filter ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-startinformers ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 06 06:27:42 crc kubenswrapper[4823]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 06 06:27:42 crc kubenswrapper[4823]: livez check failed
Dec 06 06:27:42 crc kubenswrapper[4823]: I1206 06:27:42.830745 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" podUID="3a0d23be-9267-406c-a67e-6970f6e8b922" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:27:43 crc kubenswrapper[4823]: I1206 06:27:43.000996 4823 generic.go:334] "Generic (PLEG): container finished" podID="c773a617-cbcd-4cc3-8f5c-1390032ef5da" containerID="d1da4a4010cea72c4aeaedfbbb7f9abbb221527d965abc41903cd0fd1d4204af" exitCode=0
Dec 06 06:27:43 crc kubenswrapper[4823]: I1206 06:27:43.001088 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c773a617-cbcd-4cc3-8f5c-1390032ef5da","Type":"ContainerDied","Data":"d1da4a4010cea72c4aeaedfbbb7f9abbb221527d965abc41903cd0fd1d4204af"}
Dec 06 06:27:43 crc kubenswrapper[4823]: I1206 06:27:43.006888 4823 generic.go:334] "Generic (PLEG): container finished" podID="de1b835c-94db-42b1-b073-2cde9a83979b" containerID="d47928bd88cc76c6158c97678e9f8017bfc12b18a2deb83108b3ff0a1f10c9e4" exitCode=0
Dec 06 06:27:43 crc kubenswrapper[4823]: I1206 06:27:43.007566 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"de1b835c-94db-42b1-b073-2cde9a83979b","Type":"ContainerDied","Data":"d47928bd88cc76c6158c97678e9f8017bfc12b18a2deb83108b3ff0a1f10c9e4"}
Dec 06 06:27:43 crc kubenswrapper[4823]: I1206 06:27:43.580059 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:27:43 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:27:43 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:27:43 crc kubenswrapper[4823]: healthz check failed
4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:44 crc kubenswrapper[4823]: I1206 06:27:44.608287 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:44 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:44 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:44 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:44 crc kubenswrapper[4823]: I1206 06:27:44.608367 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.234112 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.569288 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.599050 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:45 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:45 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:45 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.599107 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.644465 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kube-api-access\") pod \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.644554 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kubelet-dir\") pod \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\" (UID: \"c773a617-cbcd-4cc3-8f5c-1390032ef5da\") " Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.644575 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1b835c-94db-42b1-b073-2cde9a83979b-kubelet-dir\") pod \"de1b835c-94db-42b1-b073-2cde9a83979b\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.644634 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/de1b835c-94db-42b1-b073-2cde9a83979b-kube-api-access\") pod \"de1b835c-94db-42b1-b073-2cde9a83979b\" (UID: \"de1b835c-94db-42b1-b073-2cde9a83979b\") " Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.644728 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c773a617-cbcd-4cc3-8f5c-1390032ef5da" (UID: "c773a617-cbcd-4cc3-8f5c-1390032ef5da"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.645038 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.645866 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1b835c-94db-42b1-b073-2cde9a83979b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "de1b835c-94db-42b1-b073-2cde9a83979b" (UID: "de1b835c-94db-42b1-b073-2cde9a83979b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.667036 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1b835c-94db-42b1-b073-2cde9a83979b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "de1b835c-94db-42b1-b073-2cde9a83979b" (UID: "de1b835c-94db-42b1-b073-2cde9a83979b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.668283 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c773a617-cbcd-4cc3-8f5c-1390032ef5da" (UID: "c773a617-cbcd-4cc3-8f5c-1390032ef5da"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.749478 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.749820 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c773a617-cbcd-4cc3-8f5c-1390032ef5da-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.749844 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de1b835c-94db-42b1-b073-2cde9a83979b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.749857 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de1b835c-94db-42b1-b073-2cde9a83979b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.753204 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a2bb8a5-743e-42ed-9f30-850690a30e47-metrics-certs\") pod \"network-metrics-daemon-57k6t\" (UID: \"5a2bb8a5-743e-42ed-9f30-850690a30e47\") " pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:45 crc kubenswrapper[4823]: I1206 06:27:45.988556 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57k6t" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.382286 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c773a617-cbcd-4cc3-8f5c-1390032ef5da","Type":"ContainerDied","Data":"a01e8e4bcbc924c2bf41d1e7ffcdd5c787be58b7ba587491ffe749d0ffbd6b8d"} Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.382344 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01e8e4bcbc924c2bf41d1e7ffcdd5c787be58b7ba587491ffe749d0ffbd6b8d" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.384279 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"de1b835c-94db-42b1-b073-2cde9a83979b","Type":"ContainerDied","Data":"7b45655a864f234cac4e36665119390defabf5278bf3ebf6619432f308b98d5a"} Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.384333 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b45655a864f234cac4e36665119390defabf5278bf3ebf6619432f308b98d5a" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.384421 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.384994 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.583188 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:46 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:46 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:46 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.583262 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.658581 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.658648 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.696608 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.696688 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.859088 4823 patch_prober.go:28] interesting pod/console-f9d7485db-wzsch container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.859172 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wzsch" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 06 06:27:46 crc kubenswrapper[4823]: I1206 06:27:46.966282 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-57k6t"] Dec 06 06:27:46 crc kubenswrapper[4823]: W1206 06:27:46.979236 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2bb8a5_743e_42ed_9f30_850690a30e47.slice/crio-50ac88c7be676e5c718c3c537c7542e1f3ff87fc527ae075c38c9a161142cf57 WatchSource:0}: Error finding container 
50ac88c7be676e5c718c3c537c7542e1f3ff87fc527ae075c38c9a161142cf57: Status 404 returned error can't find the container with id 50ac88c7be676e5c718c3c537c7542e1f3ff87fc527ae075c38c9a161142cf57 Dec 06 06:27:47 crc kubenswrapper[4823]: I1206 06:27:47.398339 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-57k6t" event={"ID":"5a2bb8a5-743e-42ed-9f30-850690a30e47","Type":"ContainerStarted","Data":"50ac88c7be676e5c718c3c537c7542e1f3ff87fc527ae075c38c9a161142cf57"} Dec 06 06:27:47 crc kubenswrapper[4823]: I1206 06:27:47.588257 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:47 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:47 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:47 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:47 crc kubenswrapper[4823]: I1206 06:27:47.588334 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:47 crc kubenswrapper[4823]: I1206 06:27:47.812953 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:47 crc kubenswrapper[4823]: I1206 06:27:47.818986 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fwj5n" Dec 06 06:27:48 crc kubenswrapper[4823]: I1206 06:27:48.698710 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:48 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:48 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:48 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:48 crc kubenswrapper[4823]: I1206 06:27:48.699943 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:49 crc kubenswrapper[4823]: I1206 06:27:49.110779 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-57k6t" event={"ID":"5a2bb8a5-743e-42ed-9f30-850690a30e47","Type":"ContainerStarted","Data":"dcba45d931200545ce743ed70bf5e5204e5a4c16b5da23bd183a12ecdd35741a"} Dec 06 06:27:49 crc kubenswrapper[4823]: I1206 06:27:49.581806 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:49 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:49 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:49 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:49 crc kubenswrapper[4823]: I1206 06:27:49.581858 4823 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:50 crc kubenswrapper[4823]: I1206 06:27:50.579890 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:50 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:50 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:50 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:50 crc kubenswrapper[4823]: I1206 06:27:50.580546 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:51 crc kubenswrapper[4823]: I1206 06:27:51.175478 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-57k6t" event={"ID":"5a2bb8a5-743e-42ed-9f30-850690a30e47","Type":"ContainerStarted","Data":"f6d0702f451f96325a803695c51d2374d767e4b35b76011f2899ee87ed704e1b"} Dec 06 06:27:51 crc kubenswrapper[4823]: I1206 06:27:51.296989 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-57k6t" podStartSLOduration=148.296970898 podStartE2EDuration="2m28.296970898s" podCreationTimestamp="2025-12-06 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:27:51.29519159 +0000 UTC m=+172.580943550" watchObservedRunningTime="2025-12-06 06:27:51.296970898 +0000 UTC m=+172.582722858" Dec 06 06:27:51 crc kubenswrapper[4823]: I1206 06:27:51.796711 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:51 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:51 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:51 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:51 crc kubenswrapper[4823]: I1206 06:27:51.796764 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:52 crc kubenswrapper[4823]: I1206 06:27:52.660276 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:52 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:52 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:52 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:52 crc kubenswrapper[4823]: I1206 06:27:52.660340 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Dec 06 06:27:53 crc kubenswrapper[4823]: I1206 06:27:53.584079 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:53 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:53 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:53 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:53 crc kubenswrapper[4823]: I1206 06:27:53.584155 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:54 crc kubenswrapper[4823]: I1206 06:27:54.582372 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:54 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:54 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:54 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:54 crc kubenswrapper[4823]: I1206 06:27:54.582874 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:55 crc kubenswrapper[4823]: I1206 06:27:55.580045 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:55 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:55 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:55 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:55 crc kubenswrapper[4823]: I1206 06:27:55.580108 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.586374 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:56 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:56 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:56 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.586438 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.658259 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness 
probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.658320 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.659718 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.659791 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.659854 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.660566 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.660670 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.660622 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"dd9658ce5da80535833e84b0decaaad9053f2ec5884ed9e68e2d302ca3b4ee07"} pod="openshift-console/downloads-7954f5f757-9g789" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.661060 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" containerID="cri-o://dd9658ce5da80535833e84b0decaaad9053f2ec5884ed9e68e2d302ca3b4ee07" gracePeriod=2 Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.859899 4823 patch_prober.go:28] interesting pod/console-f9d7485db-wzsch container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 06 06:27:56 crc kubenswrapper[4823]: I1206 06:27:56.860383 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wzsch" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 06 06:27:57 
crc kubenswrapper[4823]: I1206 06:27:57.620103 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:57 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:57 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:57 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:57 crc kubenswrapper[4823]: I1206 06:27:57.620198 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:58 crc kubenswrapper[4823]: I1206 06:27:58.286900 4823 generic.go:334] "Generic (PLEG): container finished" podID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerID="dd9658ce5da80535833e84b0decaaad9053f2ec5884ed9e68e2d302ca3b4ee07" exitCode=0 Dec 06 06:27:58 crc kubenswrapper[4823]: I1206 06:27:58.286948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9g789" event={"ID":"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51","Type":"ContainerDied","Data":"dd9658ce5da80535833e84b0decaaad9053f2ec5884ed9e68e2d302ca3b4ee07"} Dec 06 06:27:58 crc kubenswrapper[4823]: I1206 06:27:58.579358 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:58 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:58 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:58 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:58 crc kubenswrapper[4823]: I1206 06:27:58.579432 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:27:59 crc kubenswrapper[4823]: I1206 06:27:59.590557 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:27:59 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:27:59 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:27:59 crc kubenswrapper[4823]: healthz check failed Dec 06 06:27:59 crc kubenswrapper[4823]: I1206 06:27:59.591163 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 06:28:00 crc kubenswrapper[4823]: I1206 06:28:00.584502 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 06:28:00 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Dec 06 06:28:00 crc kubenswrapper[4823]: [+]process-running ok Dec 06 06:28:00 crc kubenswrapper[4823]: healthz 
Dec 06 06:28:00 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:28:00 crc kubenswrapper[4823]: I1206 06:28:00.584571 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:28:00 crc kubenswrapper[4823]: I1206 06:28:00.737196 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w"
Dec 06 06:28:01 crc kubenswrapper[4823]: I1206 06:28:01.644786 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 06:28:01 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld
Dec 06 06:28:01 crc kubenswrapper[4823]: [+]process-running ok
Dec 06 06:28:01 crc kubenswrapper[4823]: healthz check failed
Dec 06 06:28:01 crc kubenswrapper[4823]: I1206 06:28:01.644863 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 06:28:02 crc kubenswrapper[4823]: I1206 06:28:02.579752 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4rlt6"
Dec 06 06:28:02 crc kubenswrapper[4823]: I1206 06:28:02.583182 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4rlt6"
Dec 06 06:28:06 crc kubenswrapper[4823]: I1206 06:28:06.051839 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:28:06 crc kubenswrapper[4823]: I1206 06:28:06.052183 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:28:06 crc kubenswrapper[4823]: I1206 06:28:06.216973 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 06:28:06 crc kubenswrapper[4823]: I1206 06:28:06.660803 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Dec 06 06:28:06 crc kubenswrapper[4823]: I1206 06:28:06.660885 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Dec 06 06:28:07 crc kubenswrapper[4823]: I1206 06:28:07.078481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:28:07 crc kubenswrapper[4823]: I1206 06:28:07.082481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wzsch"
Dec 06 06:28:07 crc kubenswrapper[4823]: I1206 06:28:07.888418 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6bzcc"
Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.073422 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 06 06:28:15 crc kubenswrapper[4823]: E1206 06:28:15.074053 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1b835c-94db-42b1-b073-2cde9a83979b" containerName="pruner"
Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.074067 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1b835c-94db-42b1-b073-2cde9a83979b" containerName="pruner"
Dec 06 06:28:15 crc kubenswrapper[4823]: E1206 06:28:15.074100 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c773a617-cbcd-4cc3-8f5c-1390032ef5da" containerName="pruner"
Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.074113 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c773a617-cbcd-4cc3-8f5c-1390032ef5da" containerName="pruner"
Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.074242 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c773a617-cbcd-4cc3-8f5c-1390032ef5da" containerName="pruner"
Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.074258 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1b835c-94db-42b1-b073-2cde9a83979b" containerName="pruner"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.077448 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.078138 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.081603 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.102892 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.103059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.534799 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.534881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.535306 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.636387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:15 crc kubenswrapper[4823]: I1206 06:28:15.709002 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:16 crc kubenswrapper[4823]: I1206 06:28:16.658318 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:16 crc kubenswrapper[4823]: I1206 06:28:16.658552 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.082270 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.084272 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.085915 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.181285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.181414 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e73db1bc-017f-4907-b783-ee164734506e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.181444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-var-lock\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.282696 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e73db1bc-017f-4907-b783-ee164734506e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.282751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-var-lock\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.282792 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.282860 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.282897 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-var-lock\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.302527 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e73db1bc-017f-4907-b783-ee164734506e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: I1206 06:28:19.414561 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:28:19 crc kubenswrapper[4823]: E1206 06:28:19.596240 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 06 06:28:19 crc kubenswrapper[4823]: E1206 06:28:19.596403 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2786k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jrnhm_openshift-marketplace(ab79175b-ce4b-4ad8-863b-31fe71624804): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 
06:28:19 crc kubenswrapper[4823]: E1206 06:28:19.597782 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jrnhm" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.529213 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrnhm" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.609791 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.610016 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb99t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jgghf_openshift-marketplace(125800d8-7679-4574-8992-181928f47efc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.611826 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jgghf" podUID="125800d8-7679-4574-8992-181928f47efc" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.620031 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.620257 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2crgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-px8wk_openshift-marketplace(6edd27de-5a66-4fbb-ac77-6889ff93d1b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.621359 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-px8wk" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.627620 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.627798 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5m6tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vjc84_openshift-marketplace(dfb88fb7-5645-4804-a359-800d2b14fabe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:25 crc kubenswrapper[4823]: E1206 06:28:25.628921 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vjc84" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" Dec 06 06:28:26 crc kubenswrapper[4823]: I1206 06:28:26.660726 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:26 crc kubenswrapper[4823]: I1206 06:28:26.661115 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:27 crc kubenswrapper[4823]: E1206 06:28:27.174033 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-px8wk" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" Dec 06 06:28:27 crc kubenswrapper[4823]: E1206 06:28:27.174161 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jgghf" podUID="125800d8-7679-4574-8992-181928f47efc" Dec 06 06:28:27 crc kubenswrapper[4823]: E1206 06:28:27.174201 4823 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vjc84" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" Dec 06 06:28:27 crc kubenswrapper[4823]: E1206 06:28:27.318730 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 06:28:27 crc kubenswrapper[4823]: E1206 06:28:27.318916 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmkcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fqbpf_openshift-marketplace(924b1003-afd5-49e2-883d-12b314c93876): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:27 crc kubenswrapper[4823]: E1206 06:28:27.320187 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fqbpf" podUID="924b1003-afd5-49e2-883d-12b314c93876" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.239882 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fqbpf" podUID="924b1003-afd5-49e2-883d-12b314c93876" Dec 06 06:28:29 crc kubenswrapper[4823]: I1206 06:28:29.728509 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 06:28:29 crc kubenswrapper[4823]: W1206 06:28:29.740682 4823 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode73db1bc_017f_4907_b783_ee164734506e.slice/crio-9010f9060bf50d8d21c133755a3847df9e45c91857083e751c4357dad24c7c05 WatchSource:0}: Error finding container 9010f9060bf50d8d21c133755a3847df9e45c91857083e751c4357dad24c7c05: Status 404 returned error can't find the container with id 9010f9060bf50d8d21c133755a3847df9e45c91857083e751c4357dad24c7c05 Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.850493 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.851006 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vxqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2sgg7_openshift-marketplace(130f260b-b329-499b-a6ff-b15b96d8bf7d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.852773 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2sgg7" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.918246 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.918589 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrhwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xx2np_openshift-marketplace(c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.920207 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xx2np" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" Dec 06 06:28:29 crc kubenswrapper[4823]: I1206 06:28:29.932844 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 06:28:29 crc kubenswrapper[4823]: W1206 06:28:29.943590 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4cb5b4a3_6d65_4abc_84e2_f2d049ac3610.slice/crio-541625c9a9346115722db842f45b98e9003ae1c39a3f97916bcbe4e2c5a88efa WatchSource:0}: Error finding container 541625c9a9346115722db842f45b98e9003ae1c39a3f97916bcbe4e2c5a88efa: Status 404 returned error can't find the container with id 541625c9a9346115722db842f45b98e9003ae1c39a3f97916bcbe4e2c5a88efa Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.948792 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.948943 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzcsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gxh4w_openshift-marketplace(6ade1bd9-4ca5-4910-8989-09b55a67bd0e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:28:29 crc kubenswrapper[4823]: E1206 06:28:29.950040 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gxh4w" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.645983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e73db1bc-017f-4907-b783-ee164734506e","Type":"ContainerStarted","Data":"d7ea0d301852c9c52a8c391b7e8dc553a4de8255da7c92670d578065483ebe90"} Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.646372 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e73db1bc-017f-4907-b783-ee164734506e","Type":"ContainerStarted","Data":"9010f9060bf50d8d21c133755a3847df9e45c91857083e751c4357dad24c7c05"} Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.648494 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610","Type":"ContainerStarted","Data":"0ed11531285a0df7507b65cfb2bcb917370645f141f8a95b3c1a8df061a64615"} Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.648559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610","Type":"ContainerStarted","Data":"541625c9a9346115722db842f45b98e9003ae1c39a3f97916bcbe4e2c5a88efa"} Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.651573 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9g789" 
event={"ID":"ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51","Type":"ContainerStarted","Data":"2a8da67913bc0f1cd05c43052a7477d8e858f8d6e6d8f9b436afcfea01af7bcc"} Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.652410 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.652564 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:30 crc kubenswrapper[4823]: E1206 06:28:30.661139 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xx2np" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" Dec 06 06:28:30 crc kubenswrapper[4823]: E1206 06:28:30.661526 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2sgg7" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.676860 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.676075753 podStartE2EDuration="11.676075753s" podCreationTimestamp="2025-12-06 06:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:28:30.669019551 +0000 UTC m=+211.954771511" watchObservedRunningTime="2025-12-06 06:28:30.676075753 +0000 UTC m=+211.961827713" Dec 06 06:28:30 crc kubenswrapper[4823]: I1206 06:28:30.702306 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=15.702284316 podStartE2EDuration="15.702284316s" podCreationTimestamp="2025-12-06 06:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:28:30.700993441 +0000 UTC m=+211.986745401" watchObservedRunningTime="2025-12-06 06:28:30.702284316 +0000 UTC m=+211.988036276" Dec 06 06:28:31 crc kubenswrapper[4823]: I1206 06:28:31.249967 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcghw"] Dec 06 06:28:31 crc kubenswrapper[4823]: I1206 06:28:31.657997 4823 generic.go:334] "Generic (PLEG): container finished" podID="4cb5b4a3-6d65-4abc-84e2-f2d049ac3610" containerID="0ed11531285a0df7507b65cfb2bcb917370645f141f8a95b3c1a8df061a64615" exitCode=0 Dec 06 06:28:31 crc kubenswrapper[4823]: I1206 06:28:31.658112 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610","Type":"ContainerDied","Data":"0ed11531285a0df7507b65cfb2bcb917370645f141f8a95b3c1a8df061a64615"} Dec 06 
06:28:31 crc kubenswrapper[4823]: I1206 06:28:31.658593 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:28:31 crc kubenswrapper[4823]: I1206 06:28:31.659384 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:31 crc kubenswrapper[4823]: I1206 06:28:31.659480 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:32 crc kubenswrapper[4823]: I1206 06:28:32.664353 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:32 crc kubenswrapper[4823]: I1206 06:28:32.664423 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:32 crc kubenswrapper[4823]: I1206 06:28:32.957982 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.092279 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kube-api-access\") pod \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.092427 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kubelet-dir\") pod \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\" (UID: \"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610\") " Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.092736 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cb5b4a3-6d65-4abc-84e2-f2d049ac3610" (UID: "4cb5b4a3-6d65-4abc-84e2-f2d049ac3610"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.099197 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cb5b4a3-6d65-4abc-84e2-f2d049ac3610" (UID: "4cb5b4a3-6d65-4abc-84e2-f2d049ac3610"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.194504 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.194552 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb5b4a3-6d65-4abc-84e2-f2d049ac3610-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.683058 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cb5b4a3-6d65-4abc-84e2-f2d049ac3610","Type":"ContainerDied","Data":"541625c9a9346115722db842f45b98e9003ae1c39a3f97916bcbe4e2c5a88efa"} Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.683111 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541625c9a9346115722db842f45b98e9003ae1c39a3f97916bcbe4e2c5a88efa" Dec 06 06:28:33 crc kubenswrapper[4823]: I1206 06:28:33.683183 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.052624 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.053231 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.053307 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.054311 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.054381 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b" gracePeriod=600 Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.657871 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.657916 4823 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.657978 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:36 crc kubenswrapper[4823]: I1206 06:28:36.657926 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:40 crc kubenswrapper[4823]: I1206 06:28:40.712992 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b" exitCode=0 Dec 06 06:28:40 crc kubenswrapper[4823]: I1206 06:28:40.713513 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b"} Dec 06 06:28:40 crc kubenswrapper[4823]: I1206 06:28:40.713537 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"3ccae4427dbcfd162a392c0f60c728a29ff44263d7626709954156668dc178c3"} Dec 06 06:28:43 crc kubenswrapper[4823]: I1206 06:28:43.742645 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerStarted","Data":"609edb194d7a69422930d3f463c1862b7bd8032d9ab1ba6c3fedb3bc7d0b496e"} Dec 06 06:28:43 crc kubenswrapper[4823]: I1206 06:28:43.746552 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerStarted","Data":"da16365e8795c221345d3a64e638e5bde6f1d7c8d77bb1b2522aab723ad06cb5"} Dec 06 06:28:45 crc kubenswrapper[4823]: I1206 06:28:45.169008 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerStarted","Data":"b390fef45b312792540aa825b3142bc437b49d33b2b0946abcb74474b8fbb7ed"} Dec 06 06:28:45 crc kubenswrapper[4823]: I1206 06:28:45.176268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerStarted","Data":"799773be94fe68c09b32f784c50c6bbeb0cf65cf70be459bf50ab3d70b711668"} Dec 06 06:28:45 crc kubenswrapper[4823]: I1206 06:28:45.178342 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerStarted","Data":"b7cb5eccb68e738073c2ccc7d5f98b988ff0f35033d6cf47e8f5e94101124ac2"} Dec 06 06:28:45 crc kubenswrapper[4823]: I1206 
06:28:45.179746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerStarted","Data":"e0c8b22dc08e446d454717b7ebca09c488e1e7c9c65d0929655fe12233f56703"} Dec 06 06:28:46 crc kubenswrapper[4823]: I1206 06:28:46.337736 4823 generic.go:334] "Generic (PLEG): container finished" podID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerID="609edb194d7a69422930d3f463c1862b7bd8032d9ab1ba6c3fedb3bc7d0b496e" exitCode=0 Dec 06 06:28:46 crc kubenswrapper[4823]: I1206 06:28:46.337795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerDied","Data":"609edb194d7a69422930d3f463c1862b7bd8032d9ab1ba6c3fedb3bc7d0b496e"} Dec 06 06:28:46 crc kubenswrapper[4823]: I1206 06:28:46.840999 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:46 crc kubenswrapper[4823]: I1206 06:28:46.841124 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-9g789 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Dec 06 06:28:46 crc kubenswrapper[4823]: I1206 06:28:46.841376 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:46 crc kubenswrapper[4823]: I1206 06:28:46.841309 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9g789" podUID="ea1af4d1-9e9f-4d1a-9c7b-1384a65bff51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Dec 06 06:28:48 crc kubenswrapper[4823]: I1206 06:28:48.535587 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerID="b7cb5eccb68e738073c2ccc7d5f98b988ff0f35033d6cf47e8f5e94101124ac2" exitCode=0 Dec 06 06:28:48 crc kubenswrapper[4823]: I1206 06:28:48.535885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerDied","Data":"b7cb5eccb68e738073c2ccc7d5f98b988ff0f35033d6cf47e8f5e94101124ac2"} Dec 06 06:28:49 crc kubenswrapper[4823]: I1206 06:28:49.546831 4823 generic.go:334] "Generic (PLEG): container finished" podID="924b1003-afd5-49e2-883d-12b314c93876" containerID="799773be94fe68c09b32f784c50c6bbeb0cf65cf70be459bf50ab3d70b711668" exitCode=0 Dec 06 06:28:49 crc kubenswrapper[4823]: I1206 06:28:49.546909 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerDied","Data":"799773be94fe68c09b32f784c50c6bbeb0cf65cf70be459bf50ab3d70b711668"} Dec 06 06:28:49 crc kubenswrapper[4823]: I1206 06:28:49.552925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerStarted","Data":"c70d0fe9c62f96814df3dce5d2a65a3fb0346c1551ee3091433158d65b57a812"} Dec 06 06:28:49 crc kubenswrapper[4823]: I1206 06:28:49.563615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerStarted","Data":"4226e67bd5d5851f8ea0e6f4524cba53a16daf6e764efe88470df0809d730a58"} Dec 06 06:28:49 crc kubenswrapper[4823]: I1206 06:28:49.573859 4823 generic.go:334] "Generic (PLEG): container finished" podID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerID="b390fef45b312792540aa825b3142bc437b49d33b2b0946abcb74474b8fbb7ed" exitCode=0 Dec 06 06:28:49 crc kubenswrapper[4823]: I1206 06:28:49.573908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerDied","Data":"b390fef45b312792540aa825b3142bc437b49d33b2b0946abcb74474b8fbb7ed"} Dec 06 06:28:53 crc kubenswrapper[4823]: I1206 06:28:53.606748 4823 generic.go:334] "Generic (PLEG): container finished" podID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerID="c70d0fe9c62f96814df3dce5d2a65a3fb0346c1551ee3091433158d65b57a812" exitCode=0 Dec 06 06:28:53 crc kubenswrapper[4823]: I1206 06:28:53.606835 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerDied","Data":"c70d0fe9c62f96814df3dce5d2a65a3fb0346c1551ee3091433158d65b57a812"} Dec 06 06:28:53 crc kubenswrapper[4823]: I1206 06:28:53.611745 4823 generic.go:334] "Generic (PLEG): container finished" podID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerID="4226e67bd5d5851f8ea0e6f4524cba53a16daf6e764efe88470df0809d730a58" exitCode=0 Dec 06 06:28:53 crc kubenswrapper[4823]: I1206 06:28:53.611803 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerDied","Data":"4226e67bd5d5851f8ea0e6f4524cba53a16daf6e764efe88470df0809d730a58"} Dec 06 06:28:54 crc kubenswrapper[4823]: I1206 06:28:54.619038 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerID="e0c8b22dc08e446d454717b7ebca09c488e1e7c9c65d0929655fe12233f56703" exitCode=0 Dec 06 06:28:54 crc kubenswrapper[4823]: I1206 06:28:54.619100 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerDied","Data":"e0c8b22dc08e446d454717b7ebca09c488e1e7c9c65d0929655fe12233f56703"} Dec 06 06:28:54 crc kubenswrapper[4823]: I1206 06:28:54.624268 4823 generic.go:334] "Generic (PLEG): container finished" podID="125800d8-7679-4574-8992-181928f47efc" containerID="da16365e8795c221345d3a64e638e5bde6f1d7c8d77bb1b2522aab723ad06cb5" exitCode=0 Dec 06 06:28:54 crc kubenswrapper[4823]: I1206 06:28:54.624310 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerDied","Data":"da16365e8795c221345d3a64e638e5bde6f1d7c8d77bb1b2522aab723ad06cb5"} Dec 06 06:28:55 crc kubenswrapper[4823]: I1206 06:28:55.632202 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerStarted","Data":"3862aa8df309141eb51d164ea790c024b9b4d76002ba74f9f681c19c89706608"} Dec 06 06:28:56 crc kubenswrapper[4823]: I1206 06:28:56.283998 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" podUID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" containerName="oauth-openshift" containerID="cri-o://89b2d8bb1069cb12c0f452071c50628dfab55c6cff0b0a1f94b1e14390797359" gracePeriod=15 Dec 06 06:28:56 crc kubenswrapper[4823]: I1206 06:28:56.655082 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-px8wk" podStartSLOduration=7.065997647 podStartE2EDuration="1m20.655064384s" podCreationTimestamp="2025-12-06 06:27:36 +0000 UTC" firstStartedPulling="2025-12-06 06:27:40.840858095 +0000 UTC m=+162.126610065" lastFinishedPulling="2025-12-06 06:28:54.429924842 +0000 UTC m=+235.715676802" observedRunningTime="2025-12-06 06:28:56.652680309 +0000 UTC m=+237.938432279" watchObservedRunningTime="2025-12-06 06:28:56.655064384 +0000 UTC m=+237.940816344" Dec 06 06:28:56 crc kubenswrapper[4823]: I1206 06:28:56.666599 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9g789" Dec 06 06:28:56 crc kubenswrapper[4823]: I1206 06:28:56.788305 4823 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qcghw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Dec 06 06:28:56 crc kubenswrapper[4823]: I1206 06:28:56.788682 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" podUID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Dec 06 06:28:57 crc kubenswrapper[4823]: I1206 06:28:57.395403 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:28:57 crc kubenswrapper[4823]: I1206 06:28:57.395454 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:28:57 crc kubenswrapper[4823]: I1206 06:28:57.644925 4823 generic.go:334] "Generic (PLEG): container finished" podID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" containerID="89b2d8bb1069cb12c0f452071c50628dfab55c6cff0b0a1f94b1e14390797359" exitCode=0 Dec 06 06:28:57 crc kubenswrapper[4823]: I1206 06:28:57.645559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" event={"ID":"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52","Type":"ContainerDied","Data":"89b2d8bb1069cb12c0f452071c50628dfab55c6cff0b0a1f94b1e14390797359"} Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.233323 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.260331 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b9699fff8-j72vz"] Dec 06 06:28:58 crc kubenswrapper[4823]: E1206 06:28:58.260585 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb5b4a3-6d65-4abc-84e2-f2d049ac3610" containerName="pruner" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.260604 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb5b4a3-6d65-4abc-84e2-f2d049ac3610" containerName="pruner" Dec 06 06:28:58 crc kubenswrapper[4823]: E1206 06:28:58.260630 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" containerName="oauth-openshift" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.260639 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" containerName="oauth-openshift" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.260817 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" containerName="oauth-openshift" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.260834 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb5b4a3-6d65-4abc-84e2-f2d049ac3610" containerName="pruner" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.261296 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.281613 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b9699fff8-j72vz"] Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.341629 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-error\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.341781 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-ocp-branding-template\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.341830 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-idp-0-file-data\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.341888 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-session\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.341924 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5n56\" 
(UniqueName: \"kubernetes.io/projected/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-kube-api-access-l5n56\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.341944 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-policies\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.342934 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-serving-cert\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.342965 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-trusted-ca-bundle\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.342996 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-login\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343017 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-service-ca\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343039 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-cliconfig\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343064 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-provider-selection\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-dir\") pod \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343226 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-router-certs\") pod 
\"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\" (UID: \"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52\") " Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343615 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.343762 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.344280 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.344354 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.344395 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.353769 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.357314 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.358228 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.358841 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.358642 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-kube-api-access-l5n56" (OuterVolumeSpecName: "kube-api-access-l5n56") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "kube-api-access-l5n56". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.359718 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.359808 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.359924 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.360272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" (UID: "c9ad3b5d-6578-4dcd-9663-9ecf80e24b52"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-error\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb3845c6-4a50-4f39-80c5-b24773bc931b-audit-dir\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445649 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445730 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-login\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445775 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445839 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445865 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445908 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-audit-policies\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.445968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446003 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frst9\" (UniqueName: \"kubernetes.io/projected/fb3845c6-4a50-4f39-80c5-b24773bc931b-kube-api-access-frst9\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446047 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-session\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446178 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446196 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5n56\" (UniqueName: \"kubernetes.io/projected/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-kube-api-access-l5n56\") on node 
\"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446213 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446226 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446243 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446259 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446273 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446287 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446303 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446323 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446337 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446351 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446365 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.446379 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.452290 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-px8wk" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="registry-server" probeResult="failure" output=< Dec 06 06:28:58 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:28:58 crc kubenswrapper[4823]: > Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547819 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-error\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547843 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb3845c6-4a50-4f39-80c5-b24773bc931b-audit-dir\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547861 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547885 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-login\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547910 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: 
\"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547944 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547960 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.547986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.548002 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-audit-policies\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.548025 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.548063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frst9\" (UniqueName: \"kubernetes.io/projected/fb3845c6-4a50-4f39-80c5-b24773bc931b-kube-api-access-frst9\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.548100 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-session\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.548496 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb3845c6-4a50-4f39-80c5-b24773bc931b-audit-dir\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " 
pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.549975 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.549991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-audit-policies\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.550847 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.551220 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.560756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.561023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-error\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.561585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.562502 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " 
pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.562561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.562707 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.562810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-system-session\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.564283 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb3845c6-4a50-4f39-80c5-b24773bc931b-v4-0-config-user-template-login\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.571995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frst9\" (UniqueName: \"kubernetes.io/projected/fb3845c6-4a50-4f39-80c5-b24773bc931b-kube-api-access-frst9\") pod \"oauth-openshift-6b9699fff8-j72vz\" (UID: \"fb3845c6-4a50-4f39-80c5-b24773bc931b\") " pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.576923 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.657855 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" event={"ID":"c9ad3b5d-6578-4dcd-9663-9ecf80e24b52","Type":"ContainerDied","Data":"b16de22ca61469fc2642eec1ee8b674ddd59a61e6745c38545ba715654feb47c"} Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.657915 4823 scope.go:117] "RemoveContainer" containerID="89b2d8bb1069cb12c0f452071c50628dfab55c6cff0b0a1f94b1e14390797359" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.658200 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcghw" Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.712127 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcghw"] Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.715903 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcghw"] Dec 06 06:28:58 crc kubenswrapper[4823]: I1206 06:28:58.787445 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b9699fff8-j72vz"] Dec 06 06:28:59 crc kubenswrapper[4823]: I1206 06:28:59.152180 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad3b5d-6578-4dcd-9663-9ecf80e24b52" path="/var/lib/kubelet/pods/c9ad3b5d-6578-4dcd-9663-9ecf80e24b52/volumes" Dec 06 06:28:59 crc kubenswrapper[4823]: I1206 06:28:59.666638 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" event={"ID":"fb3845c6-4a50-4f39-80c5-b24773bc931b","Type":"ContainerStarted","Data":"b38e81b897c05541ccebe7278979db88ffeef5567ca0b5459546d59ad5f8a0a0"} Dec 06 06:29:01 crc kubenswrapper[4823]: I1206 06:29:01.677775 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" event={"ID":"fb3845c6-4a50-4f39-80c5-b24773bc931b","Type":"ContainerStarted","Data":"08a1c975782090f12719b96394ce8b54a50dfe84f78dc72c8f1d7d7a3bb464ed"} Dec 06 06:29:01 crc kubenswrapper[4823]: I1206 06:29:01.680401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerStarted","Data":"9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502"} Dec 06 06:29:02 crc kubenswrapper[4823]: I1206 06:29:02.685138 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:29:02 crc kubenswrapper[4823]: I1206 06:29:02.691697 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" Dec 06 06:29:02 crc kubenswrapper[4823]: I1206 06:29:02.704560 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b9699fff8-j72vz" podStartSLOduration=31.704540591 podStartE2EDuration="31.704540591s" podCreationTimestamp="2025-12-06 06:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:29:02.704035697 +0000 UTC m=+243.989787667" watchObservedRunningTime="2025-12-06 06:29:02.704540591 +0000 UTC m=+243.990292551" Dec 06 06:29:02 crc kubenswrapper[4823]: I1206 06:29:02.731970 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xx2np" podStartSLOduration=11.292738261 podStartE2EDuration="1m28.731951348s" podCreationTimestamp="2025-12-06 06:27:34 +0000 UTC" firstStartedPulling="2025-12-06 06:27:40.892215844 +0000 UTC m=+162.177967804" lastFinishedPulling="2025-12-06 06:28:58.331428931 +0000 UTC m=+239.617180891" observedRunningTime="2025-12-06 06:29:02.731404123 +0000 UTC m=+244.017156093" watchObservedRunningTime="2025-12-06 06:29:02.731951348 +0000 UTC m=+244.017703308" Dec 06 06:29:04 crc kubenswrapper[4823]: I1206 
06:29:04.933289 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xx2np" Dec 06 06:29:04 crc kubenswrapper[4823]: I1206 06:29:04.933635 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xx2np" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.008366 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xx2np" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.716375 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerStarted","Data":"7d69131582340276a6fbc8af0585609689b346c16efc3b432ee650466861d3d4"} Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.719349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerStarted","Data":"1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2"} Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.721573 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerStarted","Data":"f654a12fd043a64c62e8bad1e81393dcf67e75dc15f1ba18b6a6f3b14d0c366e"} Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.723660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerStarted","Data":"9faf02b61affc5d5d01789ec4d165945787d93fbec81571cc00cdb583263f46a"} Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.726143 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerStarted","Data":"5fc767d024b5ba187a9afb38a539e696307eacf30aa01afc02e629deafc7d815"} Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.728492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerStarted","Data":"77bb5d28a309e3beaa27ea2f770567dee0c7dcf761b84621da489174ec39b279"} Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.737720 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vjc84" podStartSLOduration=5.865478715 podStartE2EDuration="1m28.737700094s" podCreationTimestamp="2025-12-06 06:27:37 +0000 UTC" firstStartedPulling="2025-12-06 06:27:41.981043632 +0000 UTC m=+163.266795592" lastFinishedPulling="2025-12-06 06:29:04.853265011 +0000 UTC m=+246.139016971" observedRunningTime="2025-12-06 06:29:05.737288043 +0000 UTC m=+247.023039993" watchObservedRunningTime="2025-12-06 06:29:05.737700094 +0000 UTC m=+247.023452054" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.759195 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrnhm" podStartSLOduration=5.5265411669999995 podStartE2EDuration="1m29.759182109s" podCreationTimestamp="2025-12-06 06:27:36 +0000 UTC" firstStartedPulling="2025-12-06 06:27:40.82412529 +0000 UTC m=+162.109877250" lastFinishedPulling="2025-12-06 06:29:05.056766232 +0000 UTC 
m=+246.342518192" observedRunningTime="2025-12-06 06:29:05.756110486 +0000 UTC m=+247.041862456" watchObservedRunningTime="2025-12-06 06:29:05.759182109 +0000 UTC m=+247.044934069" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.784325 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxh4w" podStartSLOduration=7.402986541 podStartE2EDuration="1m32.784305353s" podCreationTimestamp="2025-12-06 06:27:33 +0000 UTC" firstStartedPulling="2025-12-06 06:27:39.741974982 +0000 UTC m=+161.027726942" lastFinishedPulling="2025-12-06 06:29:05.123293794 +0000 UTC m=+246.409045754" observedRunningTime="2025-12-06 06:29:05.781101486 +0000 UTC m=+247.066853456" watchObservedRunningTime="2025-12-06 06:29:05.784305353 +0000 UTC m=+247.070057313" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.826935 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2sgg7" podStartSLOduration=8.517430408 podStartE2EDuration="1m32.826903593s" podCreationTimestamp="2025-12-06 06:27:33 +0000 UTC" firstStartedPulling="2025-12-06 06:27:40.823795461 +0000 UTC m=+162.109547421" lastFinishedPulling="2025-12-06 06:29:05.133268646 +0000 UTC m=+246.419020606" observedRunningTime="2025-12-06 06:29:05.815498253 +0000 UTC m=+247.101250213" watchObservedRunningTime="2025-12-06 06:29:05.826903593 +0000 UTC m=+247.112655553" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.839382 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fqbpf" podStartSLOduration=7.6684181989999995 podStartE2EDuration="1m31.839343162s" podCreationTimestamp="2025-12-06 06:27:34 +0000 UTC" firstStartedPulling="2025-12-06 06:27:40.885882821 +0000 UTC m=+162.171634791" lastFinishedPulling="2025-12-06 06:29:05.056807804 +0000 UTC m=+246.342559754" observedRunningTime="2025-12-06 06:29:05.838251102 +0000 UTC m=+247.124003062" watchObservedRunningTime="2025-12-06 06:29:05.839343162 +0000 UTC m=+247.125095122" Dec 06 06:29:05 crc kubenswrapper[4823]: I1206 06:29:05.859864 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jgghf" podStartSLOduration=6.057819143 podStartE2EDuration="1m28.85984451s" podCreationTimestamp="2025-12-06 06:27:37 +0000 UTC" firstStartedPulling="2025-12-06 06:27:41.984210889 +0000 UTC m=+163.269962849" lastFinishedPulling="2025-12-06 06:29:04.786236256 +0000 UTC m=+246.071988216" observedRunningTime="2025-12-06 06:29:05.859125791 +0000 UTC m=+247.144877761" watchObservedRunningTime="2025-12-06 06:29:05.85984451 +0000 UTC m=+247.145596470" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.451468 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.513001 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.527616 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.527686 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.582686 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.837304 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.837631 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d" gracePeriod=15 Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.837720 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0" gracePeriod=15 Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.837653 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af" gracePeriod=15 Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.837736 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104" gracePeriod=15 Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.837882 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8" gracePeriod=15 Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840605 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840891 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840908 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840918 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840926 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840935 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840942 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840949 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840955 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840970 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840976 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840983 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.840989 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.840997 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841003 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841101 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841111 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841121 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841132 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841141 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.841329 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.842541 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.843402 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.847008 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 06 06:29:07 crc kubenswrapper[4823]: E1206 06:29:07.874596 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.981555 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.981919 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.981951 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.982078 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.982213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.982321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.982544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:07 crc kubenswrapper[4823]: I1206 06:29:07.982579 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.025882 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.025938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083731 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083811 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083766 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.083946 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc 
kubenswrapper[4823]: I1206 06:29:08.083979 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084017 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084114 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084149 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.084166 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.176086 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:08 crc kubenswrapper[4823]: W1206 06:29:08.201008 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8293c849cca053703c12a2d9b01efb3bbbb92946e698bf6b8b572028be9073de WatchSource:0}: Error finding container 8293c849cca053703c12a2d9b01efb3bbbb92946e698bf6b8b572028be9073de: Status 404 returned error can't find the container with id 8293c849cca053703c12a2d9b01efb3bbbb92946e698bf6b8b572028be9073de Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.203239 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.203296 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:29:08 crc kubenswrapper[4823]: E1206 06:29:08.203920 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e8c77594ec384 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 06:29:08.203144068 +0000 UTC m=+249.488896028,LastTimestamp:2025-12-06 06:29:08.203144068 +0000 UTC m=+249.488896028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 06:29:08 crc kubenswrapper[4823]: I1206 06:29:08.745585 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8293c849cca053703c12a2d9b01efb3bbbb92946e698bf6b8b572028be9073de"} Dec 06 06:29:09 crc kubenswrapper[4823]: I1206 06:29:09.070514 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vjc84" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="registry-server" probeResult="failure" output=< Dec 06 06:29:09 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:29:09 crc kubenswrapper[4823]: > Dec 06 06:29:09 crc kubenswrapper[4823]: I1206 06:29:09.240890 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jgghf" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="registry-server" probeResult="failure" output=< Dec 06 06:29:09 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:29:09 crc kubenswrapper[4823]: > Dec 06 06:29:10 crc kubenswrapper[4823]: I1206 06:29:10.759821 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 06:29:10 crc kubenswrapper[4823]: I1206 06:29:10.761761 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 06:29:10 crc kubenswrapper[4823]: I1206 06:29:10.763264 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8" exitCode=0 Dec 06 06:29:10 crc kubenswrapper[4823]: I1206 06:29:10.763301 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104" exitCode=2 Dec 06 06:29:11 crc kubenswrapper[4823]: E1206 06:29:11.584609 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0.scope\": RecentStats: unable to find data in memory cache]" Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.769897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"15fe4520bf4158b939618de43a3ae2bd1beb9e351da90651b23f5055aafb28c3"} Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.772819 4823 generic.go:334] "Generic (PLEG): container finished" podID="e73db1bc-017f-4907-b783-ee164734506e" containerID="d7ea0d301852c9c52a8c391b7e8dc553a4de8255da7c92670d578065483ebe90" exitCode=0 Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.772926 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e73db1bc-017f-4907-b783-ee164734506e","Type":"ContainerDied","Data":"d7ea0d301852c9c52a8c391b7e8dc553a4de8255da7c92670d578065483ebe90"} Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.773908 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.775598 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.777120 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.777918 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af" exitCode=0 Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.777937 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0" 
exitCode=0 Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.777947 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d" exitCode=0 Dec 06 06:29:11 crc kubenswrapper[4823]: I1206 06:29:11.777982 4823 scope.go:117] "RemoveContainer" containerID="a87b8802d803cca3dccc3d2a9156e6d1c4786c9431dbd5fb09537943bf5b698a" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.794051 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.796454 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89efdf52133e550b2a4091b81f1fca393f867b4256e98fe8183787a9d0fcf2b9" Dec 06 06:29:12 crc kubenswrapper[4823]: E1206 06:29:12.797102 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.797548 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.800237 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.801182 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.801856 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.802322 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951206 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951306 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951591 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951651 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:29:12 crc kubenswrapper[4823]: I1206 06:29:12.951714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.029514 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.030068 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.030378 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.030972 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.031826 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.031867 4823 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.032172 4823 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="200ms" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.053212 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.053259 4823 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.120197 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.121109 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.121888 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.148534 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.233126 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="400ms" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.255187 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e73db1bc-017f-4907-b783-ee164734506e-kube-api-access\") pod \"e73db1bc-017f-4907-b783-ee164734506e\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.255255 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-var-lock\") pod \"e73db1bc-017f-4907-b783-ee164734506e\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.255333 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-kubelet-dir\") pod \"e73db1bc-017f-4907-b783-ee164734506e\" (UID: \"e73db1bc-017f-4907-b783-ee164734506e\") " Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.255653 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e73db1bc-017f-4907-b783-ee164734506e" (UID: "e73db1bc-017f-4907-b783-ee164734506e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.255729 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-var-lock" (OuterVolumeSpecName: "var-lock") pod "e73db1bc-017f-4907-b783-ee164734506e" (UID: "e73db1bc-017f-4907-b783-ee164734506e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.261504 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73db1bc-017f-4907-b783-ee164734506e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e73db1bc-017f-4907-b783-ee164734506e" (UID: "e73db1bc-017f-4907-b783-ee164734506e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.357138 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.357174 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e73db1bc-017f-4907-b783-ee164734506e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.357183 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e73db1bc-017f-4907-b783-ee164734506e-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 06:29:13 crc kubenswrapper[4823]: E1206 06:29:13.634440 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="800ms" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.802718 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e73db1bc-017f-4907-b783-ee164734506e","Type":"ContainerDied","Data":"9010f9060bf50d8d21c133755a3847df9e45c91857083e751c4357dad24c7c05"} Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.802754 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.802758 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.802769 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9010f9060bf50d8d21c133755a3847df9e45c91857083e751c4357dad24c7c05" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.803306 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.804057 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.806738 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.807920 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.817611 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:13 crc kubenswrapper[4823]: I1206 06:29:13.818063 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.003219 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxh4w" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.003271 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxh4w" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.049738 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxh4w" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.050365 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection 
refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.050960 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.051692 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.228288 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sgg7" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.228396 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2sgg7" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.272807 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sgg7" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.273743 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.274309 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.274649 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.275077 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: E1206 06:29:14.435554 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="1.6s" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.708016 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fqbpf" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.708060 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-fqbpf" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.747735 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fqbpf" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.748403 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.748774 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.749038 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.749457 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.749732 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.841690 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sgg7" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.842295 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.842482 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxh4w" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.842726 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.843016 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" 
pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.843261 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.843508 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.844010 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.844158 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fqbpf" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.844264 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.844548 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.844777 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.845001 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.845307 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection 
refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.845534 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.845780 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.846058 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.846304 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.968794 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xx2np" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.969539 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.969961 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.970841 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.971570 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.971958 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:14 crc kubenswrapper[4823]: I1206 06:29:14.972353 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:15 crc kubenswrapper[4823]: E1206 06:29:15.634216 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e8c77594ec384 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 06:29:08.203144068 +0000 UTC m=+249.488896028,LastTimestamp:2025-12-06 06:29:08.203144068 +0000 UTC m=+249.488896028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 06:29:16 crc kubenswrapper[4823]: E1206 06:29:16.037122 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="3.2s" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.571136 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.572099 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.572622 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.572938 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.573249 4823 
status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.573694 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:17 crc kubenswrapper[4823]: I1206 06:29:17.573992 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.067053 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.068036 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.068580 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.069055 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.069533 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.069807 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.070010 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.070209 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.108342 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.108892 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.109120 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.109494 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.109965 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.110379 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.110692 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.110923 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 
06:29:18.242194 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.243012 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.243342 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.243859 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.244150 4823 status_manager.go:851] "Failed to get status for pod" podUID="125800d8-7679-4574-8992-181928f47efc" pod="openshift-marketplace/redhat-operators-jgghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jgghf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.244307 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.244801 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.245403 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.245783 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.284190 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.285495 4823 
status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.286160 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.286842 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.287586 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.287963 4823 status_manager.go:851] "Failed to get status for pod" podUID="125800d8-7679-4574-8992-181928f47efc" pod="openshift-marketplace/redhat-operators-jgghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jgghf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.288385 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.288798 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:18 crc kubenswrapper[4823]: I1206 06:29:18.289119 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.143501 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.144333 4823 
status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.144556 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.144805 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.145007 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.145208 4823 status_manager.go:851] "Failed to get status for pod" podUID="125800d8-7679-4574-8992-181928f47efc" pod="openshift-marketplace/redhat-operators-jgghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jgghf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.145620 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: I1206 06:29:19.146065 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:19 crc kubenswrapper[4823]: E1206 06:29:19.238433 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="6.4s" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.140259 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.140975 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.141321 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.141605 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.141873 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.142126 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.142504 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.142785 4823 status_manager.go:851] "Failed to get status for pod" podUID="125800d8-7679-4574-8992-181928f47efc" pod="openshift-marketplace/redhat-operators-jgghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jgghf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.142997 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.156544 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.156592 4823 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:20 crc kubenswrapper[4823]: E1206 06:29:20.157147 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.157683 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:20 crc kubenswrapper[4823]: W1206 06:29:20.178779 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f3cfaa3f779ba47e6f4580efc3c72142d19528f944053e9d691147a7ee77e396 WatchSource:0}: Error finding container f3cfaa3f779ba47e6f4580efc3c72142d19528f944053e9d691147a7ee77e396: Status 404 returned error can't find the container with id f3cfaa3f779ba47e6f4580efc3c72142d19528f944053e9d691147a7ee77e396 Dec 06 06:29:20 crc kubenswrapper[4823]: I1206 06:29:20.843531 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3cfaa3f779ba47e6f4580efc3c72142d19528f944053e9d691147a7ee77e396"} Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.852081 4823 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="298d7ae3c7f9b0e33a8c3adbdb4161b0823f3df6c0a60aa8fb7a6f5faf007d1f" exitCode=0 Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.852174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"298d7ae3c7f9b0e33a8c3adbdb4161b0823f3df6c0a60aa8fb7a6f5faf007d1f"} Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.852526 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.852558 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:21 crc kubenswrapper[4823]: E1206 06:29:21.853183 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.853308 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.854287 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 
38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.855127 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.856195 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.856231 4823 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa" exitCode=1 Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.856256 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa"} Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.856506 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.856581 4823 scope.go:117] "RemoveContainer" containerID="aa094d3c0da82af56fbff7d89a67659a7b71611724862d3fbfcfab18b44a55aa" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.856906 4823 status_manager.go:851] "Failed to get status for pod" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.857309 4823 status_manager.go:851] "Failed to get status for pod" podUID="125800d8-7679-4574-8992-181928f47efc" pod="openshift-marketplace/redhat-operators-jgghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jgghf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.857795 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.858114 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.858612 4823 status_manager.go:851] "Failed to get status for pod" 
podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" pod="openshift-marketplace/certified-operators-gxh4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gxh4w\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.860231 4823 status_manager.go:851] "Failed to get status for pod" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" pod="openshift-marketplace/community-operators-2sgg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2sgg7\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.860634 4823 status_manager.go:851] "Failed to get status for pod" podUID="924b1003-afd5-49e2-883d-12b314c93876" pod="openshift-marketplace/certified-operators-fqbpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fqbpf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.861073 4823 status_manager.go:851] "Failed to get status for pod" podUID="125800d8-7679-4574-8992-181928f47efc" pod="openshift-marketplace/redhat-operators-jgghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jgghf\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.861571 4823 status_manager.go:851] "Failed to get status for pod" podUID="e73db1bc-017f-4907-b783-ee164734506e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.861865 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.862307 4823 status_manager.go:851] "Failed to get status for pod" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" pod="openshift-marketplace/redhat-operators-vjc84" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vjc84\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.862763 4823 status_manager.go:851] "Failed to get status for pod" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" pod="openshift-marketplace/redhat-marketplace-jrnhm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jrnhm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:21 crc kubenswrapper[4823]: I1206 06:29:21.863105 4823 status_manager.go:851] "Failed to get status for pod" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" pod="openshift-marketplace/community-operators-xx2np" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xx2np\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 06 06:29:22 crc kubenswrapper[4823]: I1206 06:29:22.870979 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"92fb4618f796b0a5ebc750dd4cf1a8f5cadc86787150342e34e0bdf745264d11"} Dec 06 06:29:22 crc kubenswrapper[4823]: I1206 06:29:22.871393 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c577430178ddb8d853fec3cd5a274a98bcb62395e94171d54629fa095582896"} Dec 06 06:29:22 crc kubenswrapper[4823]: I1206 06:29:22.871408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0a0904ed5ec10f23acc0ae84fc53c5093f94eb29caa029f8393bdfd02f600d76"} Dec 06 06:29:22 crc kubenswrapper[4823]: I1206 06:29:22.876800 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 06:29:22 crc kubenswrapper[4823]: I1206 06:29:22.876843 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fcd1be8888afb70ca64bb33f3ca3d25c0d5e1e753bbbc8d7752c59bb5ea0172f"} Dec 06 06:29:23 crc kubenswrapper[4823]: I1206 06:29:23.680102 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:29:23 crc kubenswrapper[4823]: I1206 06:29:23.885419 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:23 crc kubenswrapper[4823]: I1206 06:29:23.885451 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:23 crc kubenswrapper[4823]: I1206 06:29:23.885706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"25a34bbddfdb6dc10dd109486c1814cc3af11c38e597f98bfae605026d9d88d9"} Dec 06 06:29:23 crc kubenswrapper[4823]: I1206 06:29:23.885732 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"07f1037325007b8047dc91498ad407e224f843e26959afe8323f992bac7986d0"} Dec 06 06:29:23 crc kubenswrapper[4823]: I1206 06:29:23.885764 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:25 crc kubenswrapper[4823]: I1206 06:29:25.158362 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:25 crc kubenswrapper[4823]: I1206 06:29:25.158741 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:25 crc kubenswrapper[4823]: I1206 06:29:25.165917 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:25 crc kubenswrapper[4823]: I1206 06:29:25.546993 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:29:25 crc kubenswrapper[4823]: I1206 06:29:25.551102 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:29:28 crc kubenswrapper[4823]: I1206 06:29:28.892275 4823 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:28 crc kubenswrapper[4823]: I1206 06:29:28.910630 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:28 crc kubenswrapper[4823]: I1206 06:29:28.910915 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:28 crc kubenswrapper[4823]: I1206 06:29:28.917706 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 06:29:29 crc kubenswrapper[4823]: I1206 06:29:29.163962 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="abd02f40-4202-42c9-9fac-cdf5cbdf835c" Dec 06 06:29:29 crc kubenswrapper[4823]: I1206 06:29:29.917088 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:29 crc kubenswrapper[4823]: I1206 06:29:29.917443 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08a8d6f7-1e5f-4fdd-a613-736390c1593f" Dec 06 06:29:29 crc kubenswrapper[4823]: I1206 06:29:29.920159 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="abd02f40-4202-42c9-9fac-cdf5cbdf835c" Dec 06 06:29:33 crc kubenswrapper[4823]: I1206 06:29:33.683714 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 06:29:38 crc kubenswrapper[4823]: I1206 06:29:38.116336 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 06:29:38 crc kubenswrapper[4823]: I1206 06:29:38.172946 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 06:29:38 crc kubenswrapper[4823]: I1206 06:29:38.509614 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 06:29:39 crc kubenswrapper[4823]: I1206 06:29:39.293348 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 06:29:39 crc kubenswrapper[4823]: I1206 06:29:39.914638 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.353208 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.400127 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.436083 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.438124 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.479046 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.564633 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.571502 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 06:29:40 crc kubenswrapper[4823]: I1206 06:29:40.891622 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.010953 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.135762 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.147361 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.443609 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.447067 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.461490 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.592875 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.651298 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.830481 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.834726 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.916034 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 06:29:41 crc kubenswrapper[4823]: I1206 06:29:41.988595 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.039898 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.051273 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.051313 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.130250 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.137963 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.143699 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.223795 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.307050 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.342918 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.395741 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.437331 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.448056 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.464270 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.575629 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.627447 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.877564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.889886 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.929844 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.964795 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 06:29:42 crc kubenswrapper[4823]: I1206 06:29:42.997140 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.191234 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.207888 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.361735 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.436952 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.479113 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.503591 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.511864 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.630296 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.645344 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.656829 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.723974 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.726653 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.784350 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.790478 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.850926 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.872547 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.919035 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 06:29:43 crc kubenswrapper[4823]: I1206 06:29:43.978224 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.015608 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 
06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.016603 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.124538 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.131603 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.151146 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.260984 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.301955 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.594597 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.665650 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.666831 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.708870 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.758238 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.817315 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.909562 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.939749 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 06:29:44 crc kubenswrapper[4823]: I1206 06:29:44.976073 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.006166 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.018855 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.061804 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.082937 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.129374 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.144239 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.160265 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.200094 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.221008 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.226764 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.258243 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.338859 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.339711 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.368280 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.407014 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.425714 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.427327 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.445170 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.507857 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.516004 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.517464 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.580056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 
06:29:45.765907 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.807076 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.888114 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.888240 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 06:29:45 crc kubenswrapper[4823]: I1206 06:29:45.915750 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.060578 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.186401 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.279242 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.416331 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.653334 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.666226 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.675786 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.686100 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.832605 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.886703 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.943434 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 06:29:46 crc kubenswrapper[4823]: I1206 06:29:46.970634 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.003135 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.184788 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.274366 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.315994 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.424502 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.530681 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.530931 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.565252 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.601749 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.609814 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.620260 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.686531 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.700225 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.797770 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.837730 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.838067 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.873808 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.943216 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 06:29:47 crc kubenswrapper[4823]: I1206 06:29:47.994083 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.039742 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.156072 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.175677 4823 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.283979 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.334157 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.462447 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.485603 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.504245 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.585287 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.710492 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.757881 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.916807 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.921964 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.930252 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.967408 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 06:29:48 crc kubenswrapper[4823]: I1206 06:29:48.998838 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.162108 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.205630 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.228750 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.327529 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.345110 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.371902 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.420888 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.456583 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.475843 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.530411 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.569263 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.594133 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.649557 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.706700 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.772416 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.776715 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.794464 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.827167 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.851587 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.853644 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.903912 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.916532 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 06 06:29:49 crc kubenswrapper[4823]: I1206 06:29:49.968676 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.061460 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.061652 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.155996 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.176455 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.259432 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.263006 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.335967 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.418726 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.623128 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.706177 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.779082 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.781139 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.808517 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.815922 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.907050 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.922563 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.930875 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.931001 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.966308 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 06 06:29:50 crc kubenswrapper[4823]: I1206 06:29:50.997798 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.152640 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.162764 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.242117 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.259009 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.264699 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.269821 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.284569 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.308657 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.315944 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.351483 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.380763 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.397122 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.419059 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.427959 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.448024 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.462024 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.465444 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.728971 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.729779 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.895180 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 06 06:29:51 crc kubenswrapper[4823]: I1206 06:29:51.983490 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.039273 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.042116 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.072910 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.137259 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.147564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.253249 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.285073 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.337423 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.346732 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.381868 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.440038 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.450332 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.458876 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.526806 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.643007 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.690547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.916432 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
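The long run of "Caches populated" messages above is the kubelet's client-go reflectors completing their initial list+watch for each Secret and ConfigMap referenced by pods on the node, plus node-level types such as *v1.RuntimeClass from the shared informer factory (factory.go:160). A minimal sketch of the same informer machinery, with an assumed kubeconfig path and an illustrative namespace (not taken from the log):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the kubelet uses its own credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithNamespace("openshift-marketplace"), // illustrative namespace
	)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			fmt.Printf("cached: %s/%s\n", cm.Namespace, cm.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// The "Caches populated" log line corresponds to a reflector finishing
	// its initial list+watch; WaitForCacheSync blocks until that point.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
}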
Dec 06 06:29:52 crc kubenswrapper[4823]: I1206 06:29:52.941226 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.053061 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.090058 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.123636 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.131007 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.392416 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.454821 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.647115 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.653433 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.653487 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.657445 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.678195 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.67817046 podStartE2EDuration="25.67817046s" podCreationTimestamp="2025-12-06 06:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:29:53.669833056 +0000 UTC m=+294.955585026" watchObservedRunningTime="2025-12-06 06:29:53.67817046 +0000 UTC m=+294.963922420"
Dec 06 06:29:53 crc kubenswrapper[4823]: I1206 06:29:53.809652 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 06 06:29:54 crc kubenswrapper[4823]: I1206 06:29:54.028908 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 06 06:29:54 crc kubenswrapper[4823]: I1206 06:29:54.116806 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 06 06:29:54 crc kubenswrapper[4823]: I1206 06:29:54.303437 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 06 06:29:54 crc kubenswrapper[4823]: I1206 06:29:54.616037 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
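The pod_startup_latency_tracker line above reports roughly 25.7 s between podCreationTimestamp (06:29:28) and the observed running time (06:29:53); the zero-valued pull timestamps indicate no image pull was needed. A rough approximation of that measurement from the API side, taking the pod's Ready condition as the endpoint (hypothetical helper, clientset construction omitted):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// startupLatency approximates the tracker's measurement as the time from pod
// creation to the PodReady condition. This is only an approximation: the
// kubelet's SLO duration additionally excludes image-pull time.
func startupLatency(ctx context.Context, client kubernetes.Interface, ns, name string) (time.Duration, error) {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return c.LastTransitionTime.Sub(pod.CreationTimestamp.Time), nil
		}
	}
	return 0, fmt.Errorf("pod %s/%s is not ready yet", ns, name)
}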
Dec 06 06:29:54 crc kubenswrapper[4823]: I1206 06:29:54.951877 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 06 06:29:55 crc kubenswrapper[4823]: I1206 06:29:55.122809 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 06 06:29:55 crc kubenswrapper[4823]: I1206 06:29:55.373685 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 06 06:29:56 crc kubenswrapper[4823]: I1206 06:29:56.368178 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 06 06:29:57 crc kubenswrapper[4823]: I1206 06:29:57.521138 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.172264 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh"]
Dec 06 06:30:00 crc kubenswrapper[4823]: E1206 06:30:00.173171 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73db1bc-017f-4907-b783-ee164734506e" containerName="installer"
Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.173193 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73db1bc-017f-4907-b783-ee164734506e" containerName="installer"
Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.173331 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73db1bc-017f-4907-b783-ee164734506e" containerName="installer"
Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.173747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh"
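collect-profiles-29416710-2mtlh is an OLM CronJob run. By the upstream CronJob controller's job-naming convention (an assumption here, not stated in the log itself), the numeric suffix is the scheduled time in minutes since the Unix epoch, and it decodes to exactly the 06:30:00 tick at which the SyncLoop ADD fires:

package main

import (
	"fmt"
	"time"
)

func main() {
	// 29416710 is the suffix of collect-profiles-29416710; the CronJob
	// controller derives it as scheduledTime.Unix()/60 (assumed convention).
	const suffixMinutes = 29416710
	fmt.Println(time.Unix(suffixMinutes*60, 0).UTC())
	// Prints: 2025-12-06 06:30:00 +0000 UTC, matching the SyncLoop ADD above.
}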
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.177946 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.178085 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.182558 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh"] Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.270879 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e75b29f-2105-4ab0-9bc8-102729c188d2-config-volume\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.270951 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctsv\" (UniqueName: \"kubernetes.io/projected/7e75b29f-2105-4ab0-9bc8-102729c188d2-kube-api-access-gctsv\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.271006 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e75b29f-2105-4ab0-9bc8-102729c188d2-secret-volume\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.372913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e75b29f-2105-4ab0-9bc8-102729c188d2-config-volume\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.372995 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gctsv\" (UniqueName: \"kubernetes.io/projected/7e75b29f-2105-4ab0-9bc8-102729c188d2-kube-api-access-gctsv\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.373045 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e75b29f-2105-4ab0-9bc8-102729c188d2-secret-volume\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.374307 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e75b29f-2105-4ab0-9bc8-102729c188d2-config-volume\") pod 
\"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.390624 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gctsv\" (UniqueName: \"kubernetes.io/projected/7e75b29f-2105-4ab0-9bc8-102729c188d2-kube-api-access-gctsv\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.401582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e75b29f-2105-4ab0-9bc8-102729c188d2-secret-volume\") pod \"collect-profiles-29416710-2mtlh\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.492508 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:00 crc kubenswrapper[4823]: I1206 06:30:00.702312 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.097049 4823 generic.go:334] "Generic (PLEG): container finished" podID="7e75b29f-2105-4ab0-9bc8-102729c188d2" containerID="e8fc561ffe45e69fcb164c671db1dc35b0a28e2b3946e242c6bd64754b0d9d74" exitCode=0 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.097602 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" event={"ID":"7e75b29f-2105-4ab0-9bc8-102729c188d2","Type":"ContainerDied","Data":"e8fc561ffe45e69fcb164c671db1dc35b0a28e2b3946e242c6bd64754b0d9d74"} Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.097640 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" event={"ID":"7e75b29f-2105-4ab0-9bc8-102729c188d2","Type":"ContainerStarted","Data":"702ac5aec3dbbd85f336aa9c58668f06bf99027d9845c986335d1c61a84d6d14"} Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.853468 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqbpf"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.855118 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fqbpf" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="registry-server" containerID="cri-o://f654a12fd043a64c62e8bad1e81393dcf67e75dc15f1ba18b6a6f3b14d0c366e" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.867006 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxh4w"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.867476 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxh4w" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="registry-server" containerID="cri-o://77bb5d28a309e3beaa27ea2f770567dee0c7dcf761b84621da489174ec39b279" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.874693 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-2sgg7"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.874932 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2sgg7" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="registry-server" containerID="cri-o://5fc767d024b5ba187a9afb38a539e696307eacf30aa01afc02e629deafc7d815" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.881078 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xx2np"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.881402 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xx2np" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="registry-server" containerID="cri-o://9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.891232 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2cjj5"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.891540 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator" containerID="cri-o://ca4111359d12dbd3b574d410ddf9859b88173b7564fb391bab9403faceacb154" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.896557 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrnhm"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.896874 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrnhm" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="registry-server" containerID="cri-o://9faf02b61affc5d5d01789ec4d165945787d93fbec81571cc00cdb583263f46a" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.922615 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-px8wk"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.922927 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-px8wk" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="registry-server" containerID="cri-o://3862aa8df309141eb51d164ea790c024b9b4d76002ba74f9f681c19c89706608" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.926247 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgghf"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.926561 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jgghf" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="registry-server" containerID="cri-o://1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.937000 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-969f9"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.937968 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.941066 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vjc84"] Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.941358 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vjc84" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="registry-server" containerID="cri-o://7d69131582340276a6fbc8af0585609689b346c16efc3b432ee650466861d3d4" gracePeriod=30 Dec 06 06:30:01 crc kubenswrapper[4823]: I1206 06:30:01.951741 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-969f9"] Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.000267 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.000454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsx6q\" (UniqueName: \"kubernetes.io/projected/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-kube-api-access-vsx6q\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.000502 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.102915 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.103293 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsx6q\" (UniqueName: \"kubernetes.io/projected/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-kube-api-access-vsx6q\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.103332 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.104794 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.113653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.127814 4823 generic.go:334] "Generic (PLEG): container finished" podID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerID="77bb5d28a309e3beaa27ea2f770567dee0c7dcf761b84621da489174ec39b279" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.127962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerDied","Data":"77bb5d28a309e3beaa27ea2f770567dee0c7dcf761b84621da489174ec39b279"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.128309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsx6q\" (UniqueName: \"kubernetes.io/projected/c529f398-1c3e-4a7c-a46f-d57d2f588b9c-kube-api-access-vsx6q\") pod \"marketplace-operator-79b997595-969f9\" (UID: \"c529f398-1c3e-4a7c-a46f-d57d2f588b9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.146890 4823 generic.go:334] "Generic (PLEG): container finished" podID="125800d8-7679-4574-8992-181928f47efc" containerID="1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.146984 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerDied","Data":"1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.156853 4823 generic.go:334] "Generic (PLEG): container finished" podID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerID="3862aa8df309141eb51d164ea790c024b9b4d76002ba74f9f681c19c89706608" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.156937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerDied","Data":"3862aa8df309141eb51d164ea790c024b9b4d76002ba74f9f681c19c89706608"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.180916 4823 generic.go:334] "Generic (PLEG): container finished" podID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerID="9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.181007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerDied","Data":"9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502"} Dec 06 06:30:02 crc 
kubenswrapper[4823]: I1206 06:30:02.183716 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerID="9faf02b61affc5d5d01789ec4d165945787d93fbec81571cc00cdb583263f46a" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.183834 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerDied","Data":"9faf02b61affc5d5d01789ec4d165945787d93fbec81571cc00cdb583263f46a"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.194419 4823 generic.go:334] "Generic (PLEG): container finished" podID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerID="5fc767d024b5ba187a9afb38a539e696307eacf30aa01afc02e629deafc7d815" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.194501 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerDied","Data":"5fc767d024b5ba187a9afb38a539e696307eacf30aa01afc02e629deafc7d815"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.204445 4823 generic.go:334] "Generic (PLEG): container finished" podID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerID="ca4111359d12dbd3b574d410ddf9859b88173b7564fb391bab9403faceacb154" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.204517 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" event={"ID":"53a18f23-f29e-43ba-8568-855cb4550b7b","Type":"ContainerDied","Data":"ca4111359d12dbd3b574d410ddf9859b88173b7564fb391bab9403faceacb154"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.210157 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerDied","Data":"7d69131582340276a6fbc8af0585609689b346c16efc3b432ee650466861d3d4"} Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.210152 4823 generic.go:334] "Generic (PLEG): container finished" podID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerID="7d69131582340276a6fbc8af0585609689b346c16efc3b432ee650466861d3d4" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.213428 4823 generic.go:334] "Generic (PLEG): container finished" podID="924b1003-afd5-49e2-883d-12b314c93876" containerID="f654a12fd043a64c62e8bad1e81393dcf67e75dc15f1ba18b6a6f3b14d0c366e" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.213720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerDied","Data":"f654a12fd043a64c62e8bad1e81393dcf67e75dc15f1ba18b6a6f3b14d0c366e"} Dec 06 06:30:02 crc kubenswrapper[4823]: E1206 06:30:02.309244 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125800d8_7679_4574_8992_181928f47efc.slice/crio-1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125800d8_7679_4574_8992_181928f47efc.slice/crio-conmon-1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2.scope\": RecentStats: unable to find data in memory cache], 
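The PLEG "container finished ... exitCode=0" events are the kubelet's runtime-side view of the kills above; the same terminations also surface in the API as terminated container statuses. A sketch of observing them with a pod informer (factory construction as in the earlier informer sketch; the helper name is illustrative). The cadvisor "RecentStats: unable to find data in memory cache" error that follows is the stats provider racing these just-exited containers and is transient:

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// watchTerminations reports terminated containers from pod status updates,
// roughly the API-visible counterpart of the PLEG ContainerDied events above.
func watchTerminations(factory informers.SharedInformerFactory) {
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			pod := newObj.(*corev1.Pod)
			for _, cs := range pod.Status.ContainerStatuses {
				if t := cs.State.Terminated; t != nil {
					fmt.Printf("%s/%s container %s exited with code %d\n",
						pod.Namespace, pod.Name, cs.Name, t.ExitCode)
				}
			}
		},
	})
}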
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a8777c_f9f5_4a33_9dc8_93b93edc6fa1.slice/crio-9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a18f23_f29e_43ba_8568_855cb4550b7b.slice/crio-conmon-ca4111359d12dbd3b574d410ddf9859b88173b7564fb391bab9403faceacb154.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a8777c_f9f5_4a33_9dc8_93b93edc6fa1.slice/crio-conmon-9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502.scope\": RecentStats: unable to find data in memory cache]" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.569181 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.573146 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxh4w" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.599466 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sgg7" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.647117 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbpf" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.657184 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.669222 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx2np" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.683979 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-catalog-content\") pod \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713459 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-catalog-content\") pod \"130f260b-b329-499b-a6ff-b15b96d8bf7d\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713491 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-utilities\") pod \"130f260b-b329-499b-a6ff-b15b96d8bf7d\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713539 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vxqs\" (UniqueName: \"kubernetes.io/projected/130f260b-b329-499b-a6ff-b15b96d8bf7d-kube-api-access-8vxqs\") pod \"130f260b-b329-499b-a6ff-b15b96d8bf7d\" (UID: \"130f260b-b329-499b-a6ff-b15b96d8bf7d\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-utilities\") pod \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-catalog-content\") pod \"125800d8-7679-4574-8992-181928f47efc\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713796 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-catalog-content\") pod \"924b1003-afd5-49e2-883d-12b314c93876\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713838 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb99t\" (UniqueName: \"kubernetes.io/projected/125800d8-7679-4574-8992-181928f47efc-kube-api-access-qb99t\") pod \"125800d8-7679-4574-8992-181928f47efc\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713891 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzcsx\" (UniqueName: \"kubernetes.io/projected/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-kube-api-access-nzcsx\") pod \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\" (UID: \"6ade1bd9-4ca5-4910-8989-09b55a67bd0e\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713917 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-utilities\") 
pod \"924b1003-afd5-49e2-883d-12b314c93876\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.713946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-utilities\") pod \"125800d8-7679-4574-8992-181928f47efc\" (UID: \"125800d8-7679-4574-8992-181928f47efc\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.714000 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkcx\" (UniqueName: \"kubernetes.io/projected/924b1003-afd5-49e2-883d-12b314c93876-kube-api-access-mmkcx\") pod \"924b1003-afd5-49e2-883d-12b314c93876\" (UID: \"924b1003-afd5-49e2-883d-12b314c93876\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.717826 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-utilities" (OuterVolumeSpecName: "utilities") pod "125800d8-7679-4574-8992-181928f47efc" (UID: "125800d8-7679-4574-8992-181928f47efc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.718289 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-utilities" (OuterVolumeSpecName: "utilities") pod "130f260b-b329-499b-a6ff-b15b96d8bf7d" (UID: "130f260b-b329-499b-a6ff-b15b96d8bf7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.718445 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-utilities" (OuterVolumeSpecName: "utilities") pod "924b1003-afd5-49e2-883d-12b314c93876" (UID: "924b1003-afd5-49e2-883d-12b314c93876"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.720966 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-utilities" (OuterVolumeSpecName: "utilities") pod "6ade1bd9-4ca5-4910-8989-09b55a67bd0e" (UID: "6ade1bd9-4ca5-4910-8989-09b55a67bd0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.721233 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/130f260b-b329-499b-a6ff-b15b96d8bf7d-kube-api-access-8vxqs" (OuterVolumeSpecName: "kube-api-access-8vxqs") pod "130f260b-b329-499b-a6ff-b15b96d8bf7d" (UID: "130f260b-b329-499b-a6ff-b15b96d8bf7d"). InnerVolumeSpecName "kube-api-access-8vxqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.722531 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924b1003-afd5-49e2-883d-12b314c93876-kube-api-access-mmkcx" (OuterVolumeSpecName: "kube-api-access-mmkcx") pod "924b1003-afd5-49e2-883d-12b314c93876" (UID: "924b1003-afd5-49e2-883d-12b314c93876"). InnerVolumeSpecName "kube-api-access-mmkcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.722854 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125800d8-7679-4574-8992-181928f47efc-kube-api-access-qb99t" (OuterVolumeSpecName: "kube-api-access-qb99t") pod "125800d8-7679-4574-8992-181928f47efc" (UID: "125800d8-7679-4574-8992-181928f47efc"). InnerVolumeSpecName "kube-api-access-qb99t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.724813 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.727093 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb99t\" (UniqueName: \"kubernetes.io/projected/125800d8-7679-4574-8992-181928f47efc-kube-api-access-qb99t\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.727132 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.727144 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.727171 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmkcx\" (UniqueName: \"kubernetes.io/projected/924b1003-afd5-49e2-883d-12b314c93876-kube-api-access-mmkcx\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.727190 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.727286 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vxqs\" (UniqueName: \"kubernetes.io/projected/130f260b-b329-499b-a6ff-b15b96d8bf7d-kube-api-access-8vxqs\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.730354 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.747178 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.748527 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-kube-api-access-nzcsx" (OuterVolumeSpecName: "kube-api-access-nzcsx") pod "6ade1bd9-4ca5-4910-8989-09b55a67bd0e" (UID: "6ade1bd9-4ca5-4910-8989-09b55a67bd0e"). InnerVolumeSpecName "kube-api-access-nzcsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.759799 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.770422 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.815409 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.815772 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://15fe4520bf4158b939618de43a3ae2bd1beb9e351da90651b23f5055aafb28c3" gracePeriod=5 Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828553 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cpg6\" (UniqueName: \"kubernetes.io/projected/53a18f23-f29e-43ba-8568-855cb4550b7b-kube-api-access-7cpg6\") pod \"53a18f23-f29e-43ba-8568-855cb4550b7b\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828615 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2786k\" (UniqueName: \"kubernetes.io/projected/ab79175b-ce4b-4ad8-863b-31fe71624804-kube-api-access-2786k\") pod \"ab79175b-ce4b-4ad8-863b-31fe71624804\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828648 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-catalog-content\") pod \"ab79175b-ce4b-4ad8-863b-31fe71624804\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828694 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-catalog-content\") pod \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e75b29f-2105-4ab0-9bc8-102729c188d2-secret-volume\") pod \"7e75b29f-2105-4ab0-9bc8-102729c188d2\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828749 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-trusted-ca\") pod \"53a18f23-f29e-43ba-8568-855cb4550b7b\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828793 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-catalog-content\") pod \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.828818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-utilities\") pod \"ab79175b-ce4b-4ad8-863b-31fe71624804\" (UID: \"ab79175b-ce4b-4ad8-863b-31fe71624804\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831042 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-operator-metrics\") pod \"53a18f23-f29e-43ba-8568-855cb4550b7b\" (UID: \"53a18f23-f29e-43ba-8568-855cb4550b7b\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831141 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrhwf\" (UniqueName: \"kubernetes.io/projected/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-kube-api-access-lrhwf\") pod \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831204 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gctsv\" (UniqueName: \"kubernetes.io/projected/7e75b29f-2105-4ab0-9bc8-102729c188d2-kube-api-access-gctsv\") pod \"7e75b29f-2105-4ab0-9bc8-102729c188d2\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831243 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e75b29f-2105-4ab0-9bc8-102729c188d2-config-volume\") pod \"7e75b29f-2105-4ab0-9bc8-102729c188d2\" (UID: \"7e75b29f-2105-4ab0-9bc8-102729c188d2\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831283 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-utilities\") pod \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\" (UID: \"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831304 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2crgs\" (UniqueName: \"kubernetes.io/projected/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-kube-api-access-2crgs\") pod \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831329 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-utilities\") pod \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\" (UID: \"6edd27de-5a66-4fbb-ac77-6889ff93d1b4\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.831601 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzcsx\" (UniqueName: \"kubernetes.io/projected/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-kube-api-access-nzcsx\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.836151 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a18f23-f29e-43ba-8568-855cb4550b7b-kube-api-access-7cpg6" (OuterVolumeSpecName: "kube-api-access-7cpg6") pod "53a18f23-f29e-43ba-8568-855cb4550b7b" (UID: "53a18f23-f29e-43ba-8568-855cb4550b7b"). InnerVolumeSpecName "kube-api-access-7cpg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.836791 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-utilities" (OuterVolumeSpecName: "utilities") pod "6edd27de-5a66-4fbb-ac77-6889ff93d1b4" (UID: "6edd27de-5a66-4fbb-ac77-6889ff93d1b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.841628 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e75b29f-2105-4ab0-9bc8-102729c188d2-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e75b29f-2105-4ab0-9bc8-102729c188d2" (UID: "7e75b29f-2105-4ab0-9bc8-102729c188d2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.842327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ade1bd9-4ca5-4910-8989-09b55a67bd0e" (UID: "6ade1bd9-4ca5-4910-8989-09b55a67bd0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.842505 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab79175b-ce4b-4ad8-863b-31fe71624804-kube-api-access-2786k" (OuterVolumeSpecName: "kube-api-access-2786k") pod "ab79175b-ce4b-4ad8-863b-31fe71624804" (UID: "ab79175b-ce4b-4ad8-863b-31fe71624804"). InnerVolumeSpecName "kube-api-access-2786k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.845777 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-utilities" (OuterVolumeSpecName: "utilities") pod "ab79175b-ce4b-4ad8-863b-31fe71624804" (UID: "ab79175b-ce4b-4ad8-863b-31fe71624804"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.846716 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "924b1003-afd5-49e2-883d-12b314c93876" (UID: "924b1003-afd5-49e2-883d-12b314c93876"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.846737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "53a18f23-f29e-43ba-8568-855cb4550b7b" (UID: "53a18f23-f29e-43ba-8568-855cb4550b7b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.849417 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-kube-api-access-2crgs" (OuterVolumeSpecName: "kube-api-access-2crgs") pod "6edd27de-5a66-4fbb-ac77-6889ff93d1b4" (UID: "6edd27de-5a66-4fbb-ac77-6889ff93d1b4"). 
InnerVolumeSpecName "kube-api-access-2crgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.850821 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e75b29f-2105-4ab0-9bc8-102729c188d2-kube-api-access-gctsv" (OuterVolumeSpecName: "kube-api-access-gctsv") pod "7e75b29f-2105-4ab0-9bc8-102729c188d2" (UID: "7e75b29f-2105-4ab0-9bc8-102729c188d2"). InnerVolumeSpecName "kube-api-access-gctsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.852010 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-kube-api-access-lrhwf" (OuterVolumeSpecName: "kube-api-access-lrhwf") pod "c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" (UID: "c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1"). InnerVolumeSpecName "kube-api-access-lrhwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.853976 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "53a18f23-f29e-43ba-8568-855cb4550b7b" (UID: "53a18f23-f29e-43ba-8568-855cb4550b7b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.854134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-utilities" (OuterVolumeSpecName: "utilities") pod "c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" (UID: "c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.856152 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e75b29f-2105-4ab0-9bc8-102729c188d2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e75b29f-2105-4ab0-9bc8-102729c188d2" (UID: "7e75b29f-2105-4ab0-9bc8-102729c188d2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.866547 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab79175b-ce4b-4ad8-863b-31fe71624804" (UID: "ab79175b-ce4b-4ad8-863b-31fe71624804"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.875082 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "130f260b-b329-499b-a6ff-b15b96d8bf7d" (UID: "130f260b-b329-499b-a6ff-b15b96d8bf7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.877142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6edd27de-5a66-4fbb-ac77-6889ff93d1b4" (UID: "6edd27de-5a66-4fbb-ac77-6889ff93d1b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: W1206 06:30:02.887291 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc529f398_1c3e_4a7c_a46f_d57d2f588b9c.slice/crio-6be58f9c03e160135d9c6f8d291b52cffcadb959f753de9355915a01ceed047d WatchSource:0}: Error finding container 6be58f9c03e160135d9c6f8d291b52cffcadb959f753de9355915a01ceed047d: Status 404 returned error can't find the container with id 6be58f9c03e160135d9c6f8d291b52cffcadb959f753de9355915a01ceed047d Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.904815 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" (UID: "c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.907845 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-969f9"] Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.932907 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m6tx\" (UniqueName: \"kubernetes.io/projected/dfb88fb7-5645-4804-a359-800d2b14fabe-kube-api-access-5m6tx\") pod \"dfb88fb7-5645-4804-a359-800d2b14fabe\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933059 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-utilities\") pod \"dfb88fb7-5645-4804-a359-800d2b14fabe\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-catalog-content\") pod \"dfb88fb7-5645-4804-a359-800d2b14fabe\" (UID: \"dfb88fb7-5645-4804-a359-800d2b14fabe\") " Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933330 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933354 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933363 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/130f260b-b329-499b-a6ff-b15b96d8bf7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc 
kubenswrapper[4823]: I1206 06:30:02.933373 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933385 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrhwf\" (UniqueName: \"kubernetes.io/projected/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-kube-api-access-lrhwf\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933395 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/924b1003-afd5-49e2-883d-12b314c93876-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933405 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gctsv\" (UniqueName: \"kubernetes.io/projected/7e75b29f-2105-4ab0-9bc8-102729c188d2-kube-api-access-gctsv\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933414 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e75b29f-2105-4ab0-9bc8-102729c188d2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933422 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933430 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2crgs\" (UniqueName: \"kubernetes.io/projected/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-kube-api-access-2crgs\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933440 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933449 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cpg6\" (UniqueName: \"kubernetes.io/projected/53a18f23-f29e-43ba-8568-855cb4550b7b-kube-api-access-7cpg6\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933460 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2786k\" (UniqueName: \"kubernetes.io/projected/ab79175b-ce4b-4ad8-863b-31fe71624804-kube-api-access-2786k\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933469 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab79175b-ce4b-4ad8-863b-31fe71624804-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933477 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd27de-5a66-4fbb-ac77-6889ff93d1b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933485 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e75b29f-2105-4ab0-9bc8-102729c188d2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 
crc kubenswrapper[4823]: I1206 06:30:02.933493 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53a18f23-f29e-43ba-8568-855cb4550b7b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.933502 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ade1bd9-4ca5-4910-8989-09b55a67bd0e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.936209 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-utilities" (OuterVolumeSpecName: "utilities") pod "dfb88fb7-5645-4804-a359-800d2b14fabe" (UID: "dfb88fb7-5645-4804-a359-800d2b14fabe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.937865 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb88fb7-5645-4804-a359-800d2b14fabe-kube-api-access-5m6tx" (OuterVolumeSpecName: "kube-api-access-5m6tx") pod "dfb88fb7-5645-4804-a359-800d2b14fabe" (UID: "dfb88fb7-5645-4804-a359-800d2b14fabe"). InnerVolumeSpecName "kube-api-access-5m6tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:02 crc kubenswrapper[4823]: I1206 06:30:02.956185 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "125800d8-7679-4574-8992-181928f47efc" (UID: "125800d8-7679-4574-8992-181928f47efc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.034268 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.034417 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m6tx\" (UniqueName: \"kubernetes.io/projected/dfb88fb7-5645-4804-a359-800d2b14fabe-kube-api-access-5m6tx\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.034693 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125800d8-7679-4574-8992-181928f47efc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.050389 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfb88fb7-5645-4804-a359-800d2b14fabe" (UID: "dfb88fb7-5645-4804-a359-800d2b14fabe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.136376 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb88fb7-5645-4804-a359-800d2b14fabe-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.228398 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px8wk" event={"ID":"6edd27de-5a66-4fbb-ac77-6889ff93d1b4","Type":"ContainerDied","Data":"d1b2bbee2dc63d56575b10d65db4fddf8a5a515620b638c93afbf555de310075"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.228487 4823 scope.go:117] "RemoveContainer" containerID="3862aa8df309141eb51d164ea790c024b9b4d76002ba74f9f681c19c89706608" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.228636 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px8wk" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.238325 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbpf" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.238881 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbpf" event={"ID":"924b1003-afd5-49e2-883d-12b314c93876","Type":"ContainerDied","Data":"a2218be1f3b868e78c5fee81e6a97c73bffc9d0861a1180bf17f4e49c9c15008"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.243254 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrnhm" event={"ID":"ab79175b-ce4b-4ad8-863b-31fe71624804","Type":"ContainerDied","Data":"2662b60d3f057dcb3a871a2f0db47aba9f9c78401d34cfd3e80cf12af825bb00"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.243435 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrnhm" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.251021 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sgg7" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.251953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sgg7" event={"ID":"130f260b-b329-499b-a6ff-b15b96d8bf7d","Type":"ContainerDied","Data":"83f9c51ddd79895c7f90b0b24a4a64d715187500e987e7eaf8fba0d48f9acc18"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.254722 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-px8wk"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.255217 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" event={"ID":"53a18f23-f29e-43ba-8568-855cb4550b7b","Type":"ContainerDied","Data":"5151646f3ee75f326c07c901c17bf43d448523955a014bab9bfb68970f26cb25"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.256177 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2cjj5" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.256913 4823 scope.go:117] "RemoveContainer" containerID="609edb194d7a69422930d3f463c1862b7bd8032d9ab1ba6c3fedb3bc7d0b496e" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.263977 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-px8wk"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.265459 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgghf" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.265459 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgghf" event={"ID":"125800d8-7679-4574-8992-181928f47efc","Type":"ContainerDied","Data":"a4fe80ac9b650217f9f02ca316561cecf39d9e72c2f09f770d47d4c6eaf91f2d"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.268131 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" event={"ID":"7e75b29f-2105-4ab0-9bc8-102729c188d2","Type":"ContainerDied","Data":"702ac5aec3dbbd85f336aa9c58668f06bf99027d9845c986335d1c61a84d6d14"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.268388 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="702ac5aec3dbbd85f336aa9c58668f06bf99027d9845c986335d1c61a84d6d14" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.268382 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.274581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx2np" event={"ID":"c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1","Type":"ContainerDied","Data":"22083de0bbc67447e54910ff361ef6043ffb2d2100349ed45158633dc4ece786"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.274710 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx2np" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.279324 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxh4w" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.279411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxh4w" event={"ID":"6ade1bd9-4ca5-4910-8989-09b55a67bd0e","Type":"ContainerDied","Data":"79b45fb3821a7a1a539e57cdbd91b65b24541938b5b426ee59a5e721ef8a9f4c"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.282134 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" event={"ID":"c529f398-1c3e-4a7c-a46f-d57d2f588b9c","Type":"ContainerStarted","Data":"4feb692e1f6deb30ab8e0437bd723fe31830a93e37d9c6fc3c8c48d813877f54"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.282187 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" event={"ID":"c529f398-1c3e-4a7c-a46f-d57d2f588b9c","Type":"ContainerStarted","Data":"6be58f9c03e160135d9c6f8d291b52cffcadb959f753de9355915a01ceed047d"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.282825 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.286731 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-969f9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" start-of-body= Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.286788 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" podUID="c529f398-1c3e-4a7c-a46f-d57d2f588b9c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.287484 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrnhm"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.288016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vjc84" event={"ID":"dfb88fb7-5645-4804-a359-800d2b14fabe","Type":"ContainerDied","Data":"71bf6c9afbf478b0aa5ffda6fb16e0fa258be059656478907cf4692e1e5b2a1d"} Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.288240 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vjc84" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.293167 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrnhm"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.301645 4823 scope.go:117] "RemoveContainer" containerID="1ff12bfcc8a37fffdb53d9bb32af8f9d7d285514786232d6d04a6e9679ed2b44" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.320368 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sgg7"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.325573 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2sgg7"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.330225 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqbpf"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.336638 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fqbpf"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.340057 4823 scope.go:117] "RemoveContainer" containerID="f654a12fd043a64c62e8bad1e81393dcf67e75dc15f1ba18b6a6f3b14d0c366e" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.340365 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xx2np"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.347180 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xx2np"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.349050 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" podStartSLOduration=2.3490244860000002 podStartE2EDuration="2.349024486s" podCreationTimestamp="2025-12-06 06:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:30:03.332278185 +0000 UTC m=+304.618030155" watchObservedRunningTime="2025-12-06 06:30:03.349024486 +0000 UTC m=+304.634776476" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.355838 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxh4w"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.358018 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxh4w"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.364233 4823 scope.go:117] "RemoveContainer" containerID="799773be94fe68c09b32f784c50c6bbeb0cf65cf70be459bf50ab3d70b711668" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.364264 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgghf"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.373231 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jgghf"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.378907 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vjc84"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.384013 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vjc84"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.393067 4823 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2cjj5"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.396999 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2cjj5"] Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.401943 4823 scope.go:117] "RemoveContainer" containerID="4a0b529b84dbff6f27c4ac57b2243760759c470d9cec14e54af716a1a9a52c22" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.441540 4823 scope.go:117] "RemoveContainer" containerID="9faf02b61affc5d5d01789ec4d165945787d93fbec81571cc00cdb583263f46a" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.465210 4823 scope.go:117] "RemoveContainer" containerID="b7cb5eccb68e738073c2ccc7d5f98b988ff0f35033d6cf47e8f5e94101124ac2" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.480673 4823 scope.go:117] "RemoveContainer" containerID="05d839d7047ae75b274457e02d22cbe1c6622642f640e5d604c204a86d86401a" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.501304 4823 scope.go:117] "RemoveContainer" containerID="5fc767d024b5ba187a9afb38a539e696307eacf30aa01afc02e629deafc7d815" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.520697 4823 scope.go:117] "RemoveContainer" containerID="c70d0fe9c62f96814df3dce5d2a65a3fb0346c1551ee3091433158d65b57a812" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.543489 4823 scope.go:117] "RemoveContainer" containerID="3d88a3196f43ac4857a976954211b27ebbab989f5b853f461e90a0e79a9e2f90" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.564095 4823 scope.go:117] "RemoveContainer" containerID="ca4111359d12dbd3b574d410ddf9859b88173b7564fb391bab9403faceacb154" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.580814 4823 scope.go:117] "RemoveContainer" containerID="1a3955f2dec87d57d871245ed95cbc964516943188246f89e95a97aab0ebadb2" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.596888 4823 scope.go:117] "RemoveContainer" containerID="da16365e8795c221345d3a64e638e5bde6f1d7c8d77bb1b2522aab723ad06cb5" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.614477 4823 scope.go:117] "RemoveContainer" containerID="f512251dddca7bd986cfe6e2ce15a71fb43cb729e7382def0360b71ba37316e1" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.628817 4823 scope.go:117] "RemoveContainer" containerID="9d3c9a7253672fd5bff021c5538709eb0e0ff5876d6cc45672e2b71e1bfcf502" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.649052 4823 scope.go:117] "RemoveContainer" containerID="b390fef45b312792540aa825b3142bc437b49d33b2b0946abcb74474b8fbb7ed" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.674601 4823 scope.go:117] "RemoveContainer" containerID="2dc0d68e6214e5b282fa5cde765c9b3b8cf53b447e0c185cca0995e57ecf49dd" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.693707 4823 scope.go:117] "RemoveContainer" containerID="77bb5d28a309e3beaa27ea2f770567dee0c7dcf761b84621da489174ec39b279" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.711061 4823 scope.go:117] "RemoveContainer" containerID="4226e67bd5d5851f8ea0e6f4524cba53a16daf6e764efe88470df0809d730a58" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.728672 4823 scope.go:117] "RemoveContainer" containerID="af21c8e9a3740a3bc0bb6e6a2068b3db5d70bd3b4fcab1ca606660efd88c86a0" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.747543 4823 scope.go:117] "RemoveContainer" containerID="7d69131582340276a6fbc8af0585609689b346c16efc3b432ee650466861d3d4" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.761387 4823 
scope.go:117] "RemoveContainer" containerID="e0c8b22dc08e446d454717b7ebca09c488e1e7c9c65d0929655fe12233f56703" Dec 06 06:30:03 crc kubenswrapper[4823]: I1206 06:30:03.779037 4823 scope.go:117] "RemoveContainer" containerID="02c706c6536e936b43edea8497dbd5e4b71834259696710a1df91b974a583cd9" Dec 06 06:30:04 crc kubenswrapper[4823]: I1206 06:30:04.305811 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.147969 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125800d8-7679-4574-8992-181928f47efc" path="/var/lib/kubelet/pods/125800d8-7679-4574-8992-181928f47efc/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.149018 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" path="/var/lib/kubelet/pods/130f260b-b329-499b-a6ff-b15b96d8bf7d/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.149822 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" path="/var/lib/kubelet/pods/53a18f23-f29e-43ba-8568-855cb4550b7b/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.151045 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" path="/var/lib/kubelet/pods/6ade1bd9-4ca5-4910-8989-09b55a67bd0e/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.151757 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" path="/var/lib/kubelet/pods/6edd27de-5a66-4fbb-ac77-6889ff93d1b4/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.153317 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924b1003-afd5-49e2-883d-12b314c93876" path="/var/lib/kubelet/pods/924b1003-afd5-49e2-883d-12b314c93876/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.154324 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" path="/var/lib/kubelet/pods/ab79175b-ce4b-4ad8-863b-31fe71624804/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.155054 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" path="/var/lib/kubelet/pods/c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1/volumes" Dec 06 06:30:05 crc kubenswrapper[4823]: I1206 06:30:05.156318 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" path="/var/lib/kubelet/pods/dfb88fb7-5645-4804-a359-800d2b14fabe/volumes" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.326623 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.328087 4823 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="15fe4520bf4158b939618de43a3ae2bd1beb9e351da90651b23f5055aafb28c3" exitCode=137 Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.395894 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.395980 4823 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517180 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517236 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517293 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517371 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517394 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517686 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517736 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517744 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.517908 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.528958 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.619088 4823 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.619133 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.619144 4823 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.619156 4823 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:08 crc kubenswrapper[4823]: I1206 06:30:08.619164 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:09 crc kubenswrapper[4823]: I1206 06:30:09.147106 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 06 06:30:09 crc kubenswrapper[4823]: I1206 06:30:09.335147 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 06:30:09 crc kubenswrapper[4823]: I1206 06:30:09.335524 4823 scope.go:117] "RemoveContainer" containerID="15fe4520bf4158b939618de43a3ae2bd1beb9e351da90651b23f5055aafb28c3" Dec 06 06:30:09 crc kubenswrapper[4823]: I1206 06:30:09.335553 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.586094 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rnpnm"] Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.587056 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerName="controller-manager" containerID="cri-o://79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf" gracePeriod=30 Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.718487 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"] Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.718717 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" podUID="384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" containerName="route-controller-manager" containerID="cri-o://c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf" gracePeriod=30 Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.920921 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.924712 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-serving-cert\") pod \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.924755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-client-ca\") pod \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.924857 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfhsm\" (UniqueName: \"kubernetes.io/projected/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-kube-api-access-vfhsm\") pod \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.924893 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-proxy-ca-bundles\") pod \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.925808 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" (UID: "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.925822 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" (UID: "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.942076 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-kube-api-access-vfhsm" (OuterVolumeSpecName: "kube-api-access-vfhsm") pod "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" (UID: "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f"). InnerVolumeSpecName "kube-api-access-vfhsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:31 crc kubenswrapper[4823]: I1206 06:30:31.942714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" (UID: "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.013879 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.025586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-config\") pod \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\" (UID: \"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f\") " Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.025810 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.025828 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfhsm\" (UniqueName: \"kubernetes.io/projected/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-kube-api-access-vfhsm\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.025838 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.025847 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.026331 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-config" (OuterVolumeSpecName: "config") pod "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" (UID: "9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.127263 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-serving-cert\") pod \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.127412 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-489d6\" (UniqueName: \"kubernetes.io/projected/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-kube-api-access-489d6\") pod \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.127462 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-client-ca\") pod \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.127510 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-config\") pod \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\" (UID: \"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e\") " Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.127801 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.128370 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-client-ca" (OuterVolumeSpecName: "client-ca") pod "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" (UID: "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.128395 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-config" (OuterVolumeSpecName: "config") pod "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" (UID: "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.130672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" (UID: "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.130769 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-kube-api-access-489d6" (OuterVolumeSpecName: "kube-api-access-489d6") pod "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" (UID: "384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e"). InnerVolumeSpecName "kube-api-access-489d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.229106 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-489d6\" (UniqueName: \"kubernetes.io/projected/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-kube-api-access-489d6\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.229142 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.229156 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.229164 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.455905 4823 generic.go:334] "Generic (PLEG): container finished" podID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerID="79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf" exitCode=0 Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.455968 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.455992 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" event={"ID":"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f","Type":"ContainerDied","Data":"79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf"} Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.456027 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rnpnm" event={"ID":"9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f","Type":"ContainerDied","Data":"3ae501e5cba20fed10fa6d283353c6e0feebbdea80bd6e85efe7ccde6ecd5303"} Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.456048 4823 scope.go:117] "RemoveContainer" containerID="79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.459058 4823 generic.go:334] "Generic (PLEG): container finished" podID="384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" containerID="c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf" exitCode=0 Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.459119 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" event={"ID":"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e","Type":"ContainerDied","Data":"c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf"} Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.459142 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" event={"ID":"384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e","Type":"ContainerDied","Data":"f8077a323114c5f6f30ba3cf97c425a5b118788b392198f289a2a973973d3306"} Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.459213 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.471533 4823 scope.go:117] "RemoveContainer" containerID="79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf" Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.473637 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf\": container with ID starting with 79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf not found: ID does not exist" containerID="79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.473996 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf"} err="failed to get container status \"79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf\": rpc error: code = NotFound desc = could not find container \"79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf\": container with ID starting with 79d6f959faba29907ac2c8c8fcc0cad26f3757850c6d9c5a585ef0d9cafd8edf not found: ID does not exist" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.474028 4823 scope.go:117] "RemoveContainer" containerID="c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.488789 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rnpnm"] Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.490844 4823 scope.go:117] "RemoveContainer" containerID="c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf" Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.492015 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf\": container with ID starting with c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf not found: ID does not exist" containerID="c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.492064 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf"} err="failed to get container status \"c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf\": rpc error: code = NotFound desc = could not find container \"c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf\": container with ID starting with c00ef277082b15665e136ebbb5018a97cf155dc39c308c78ba0727718b5ff5cf not found: ID does not exist" Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.494198 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rnpnm"] Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.496999 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"] Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.499416 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zchg5"] Dec 06 
06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629195 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"]
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629426 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629441 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629452 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629463 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629477 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629486 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629495 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629502 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629511 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629518 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629530 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629538 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629550 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629558 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629569 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629577 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629587 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629594 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629603 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629610 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629620 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerName="controller-manager"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629628 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerName="controller-manager"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629640 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629648 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629678 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629687 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629696 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629704 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629715 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629722 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629731 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629739 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629750 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e75b29f-2105-4ab0-9bc8-102729c188d2" containerName="collect-profiles"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629757 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e75b29f-2105-4ab0-9bc8-102729c188d2" containerName="collect-profiles"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629766 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629774 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629783 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629790 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="extract-content"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629798 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629806 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629815 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629823 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629831 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629839 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629848 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629856 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629865 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629873 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629881 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629889 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629898 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629906 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629918 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629926 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="extract-utilities"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629935 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" containerName="route-controller-manager"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629943 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" containerName="route-controller-manager"
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.629950 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.629958 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630064 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="125800d8-7679-4574-8992-181928f47efc" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630075 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8777c-f9f5-4a33-9dc8-93b93edc6fa1" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630087 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="924b1003-afd5-49e2-883d-12b314c93876" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630101 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" containerName="route-controller-manager"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630112 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e75b29f-2105-4ab0-9bc8-102729c188d2" containerName="collect-profiles"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630121 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="130f260b-b329-499b-a6ff-b15b96d8bf7d" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630132 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a18f23-f29e-43ba-8568-855cb4550b7b" containerName="marketplace-operator"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630141 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6edd27de-5a66-4fbb-ac77-6889ff93d1b4" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630150 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" containerName="controller-manager"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630158 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ade1bd9-4ca5-4910-8989-09b55a67bd0e" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630169 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab79175b-ce4b-4ad8-863b-31fe71624804" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630178 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630188 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb88fb7-5645-4804-a359-800d2b14fabe" containerName="registry-server"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.630637 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.635109 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-client-ca\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.635158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-proxy-ca-bundles\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.635177 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17169b08-cdf2-4035-9b1f-368827514331-serving-cert\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.635231 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwq89\" (UniqueName: \"kubernetes.io/projected/17169b08-cdf2-4035-9b1f-368827514331-kube-api-access-kwq89\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.635258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-config\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.636042 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.636168 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.637220 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.637746 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.637876 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.640950 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.644581 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.652052 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"]
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.660963 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"]
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.661959 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.664832 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.667475 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.667793 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.667843 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.668098 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.668366 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.675122 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"]
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.707204 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"]
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.707552 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-kwq89 proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm" podUID="17169b08-cdf2-4035-9b1f-368827514331"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736519 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f960ca-da18-4dc3-804a-39e2fbb65830-serving-cert\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-config\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736678 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwq89\" (UniqueName: \"kubernetes.io/projected/17169b08-cdf2-4035-9b1f-368827514331-kube-api-access-kwq89\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736747 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-client-ca\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736823 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-config\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpz2k\" (UniqueName: \"kubernetes.io/projected/15f960ca-da18-4dc3-804a-39e2fbb65830-kube-api-access-cpz2k\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.736947 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-client-ca\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.737095 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17169b08-cdf2-4035-9b1f-368827514331-serving-cert\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.737154 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-proxy-ca-bundles\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.738111 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-client-ca\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.738412 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-config\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.738875 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-proxy-ca-bundles\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.740951 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17169b08-cdf2-4035-9b1f-368827514331-serving-cert\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.755214 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwq89\" (UniqueName: \"kubernetes.io/projected/17169b08-cdf2-4035-9b1f-368827514331-kube-api-access-kwq89\") pod \"controller-manager-568d55fbb8-p5jtm\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.812816 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"]
Dec 06 06:30:32 crc kubenswrapper[4823]: E1206 06:30:32.813282 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-cpz2k serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9" podUID="15f960ca-da18-4dc3-804a-39e2fbb65830"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.838996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f960ca-da18-4dc3-804a-39e2fbb65830-serving-cert\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.839090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-config\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.839118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-client-ca\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.839166 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpz2k\" (UniqueName: \"kubernetes.io/projected/15f960ca-da18-4dc3-804a-39e2fbb65830-kube-api-access-cpz2k\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.840309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-client-ca\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.840549 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-config\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.843608 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f960ca-da18-4dc3-804a-39e2fbb65830-serving-cert\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:32 crc kubenswrapper[4823]: I1206 06:30:32.859439 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpz2k\" (UniqueName: \"kubernetes.io/projected/15f960ca-da18-4dc3-804a-39e2fbb65830-kube-api-access-cpz2k\") pod \"route-controller-manager-58fcf58c46-r2hs9\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.147781 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e" path="/var/lib/kubelet/pods/384c5f15-0c46-4b8f-9dd7-3fc2253a0a3e/volumes"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.148353 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f" path="/var/lib/kubelet/pods/9a8b35ec-e034-4e77-9fa9-9b62b0cbd82f/volumes"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.469985 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.470005 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.477895 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.482932 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.548790 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwq89\" (UniqueName: \"kubernetes.io/projected/17169b08-cdf2-4035-9b1f-368827514331-kube-api-access-kwq89\") pod \"17169b08-cdf2-4035-9b1f-368827514331\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.548897 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17169b08-cdf2-4035-9b1f-368827514331-serving-cert\") pod \"17169b08-cdf2-4035-9b1f-368827514331\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.548926 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-config\") pod \"15f960ca-da18-4dc3-804a-39e2fbb65830\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.548991 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-client-ca\") pod \"15f960ca-da18-4dc3-804a-39e2fbb65830\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549651 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-client-ca" (OuterVolumeSpecName: "client-ca") pod "15f960ca-da18-4dc3-804a-39e2fbb65830" (UID: "15f960ca-da18-4dc3-804a-39e2fbb65830"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549716 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-config" (OuterVolumeSpecName: "config") pod "15f960ca-da18-4dc3-804a-39e2fbb65830" (UID: "15f960ca-da18-4dc3-804a-39e2fbb65830"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-proxy-ca-bundles\") pod \"17169b08-cdf2-4035-9b1f-368827514331\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549824 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "17169b08-cdf2-4035-9b1f-368827514331" (UID: "17169b08-cdf2-4035-9b1f-368827514331"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f960ca-da18-4dc3-804a-39e2fbb65830-serving-cert\") pod \"15f960ca-da18-4dc3-804a-39e2fbb65830\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549934 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-client-ca\") pod \"17169b08-cdf2-4035-9b1f-368827514331\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.549959 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpz2k\" (UniqueName: \"kubernetes.io/projected/15f960ca-da18-4dc3-804a-39e2fbb65830-kube-api-access-cpz2k\") pod \"15f960ca-da18-4dc3-804a-39e2fbb65830\" (UID: \"15f960ca-da18-4dc3-804a-39e2fbb65830\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.550011 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-config\") pod \"17169b08-cdf2-4035-9b1f-368827514331\" (UID: \"17169b08-cdf2-4035-9b1f-368827514331\") "
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.550405 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-client-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.550435 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.550450 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f960ca-da18-4dc3-804a-39e2fbb65830-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.550507 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-config" (OuterVolumeSpecName: "config") pod "17169b08-cdf2-4035-9b1f-368827514331" (UID: "17169b08-cdf2-4035-9b1f-368827514331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.550789 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-client-ca" (OuterVolumeSpecName: "client-ca") pod "17169b08-cdf2-4035-9b1f-368827514331" (UID: "17169b08-cdf2-4035-9b1f-368827514331"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.552327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17169b08-cdf2-4035-9b1f-368827514331-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17169b08-cdf2-4035-9b1f-368827514331" (UID: "17169b08-cdf2-4035-9b1f-368827514331"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.552546 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17169b08-cdf2-4035-9b1f-368827514331-kube-api-access-kwq89" (OuterVolumeSpecName: "kube-api-access-kwq89") pod "17169b08-cdf2-4035-9b1f-368827514331" (UID: "17169b08-cdf2-4035-9b1f-368827514331"). InnerVolumeSpecName "kube-api-access-kwq89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.552600 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f960ca-da18-4dc3-804a-39e2fbb65830-kube-api-access-cpz2k" (OuterVolumeSpecName: "kube-api-access-cpz2k") pod "15f960ca-da18-4dc3-804a-39e2fbb65830" (UID: "15f960ca-da18-4dc3-804a-39e2fbb65830"). InnerVolumeSpecName "kube-api-access-cpz2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.552753 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f960ca-da18-4dc3-804a-39e2fbb65830-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "15f960ca-da18-4dc3-804a-39e2fbb65830" (UID: "15f960ca-da18-4dc3-804a-39e2fbb65830"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.651697 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.651735 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwq89\" (UniqueName: \"kubernetes.io/projected/17169b08-cdf2-4035-9b1f-368827514331-kube-api-access-kwq89\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.651747 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17169b08-cdf2-4035-9b1f-368827514331-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.651758 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f960ca-da18-4dc3-804a-39e2fbb65830-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.651766 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17169b08-cdf2-4035-9b1f-368827514331-client-ca\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:33 crc kubenswrapper[4823]: I1206 06:30:33.651774 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpz2k\" (UniqueName: \"kubernetes.io/projected/15f960ca-da18-4dc3-804a-39e2fbb65830-kube-api-access-cpz2k\") on node \"crc\" DevicePath \"\""
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.474694 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.477823 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.511142 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"]
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.512110 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.515086 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.515860 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.516022 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.516136 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.516439 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.516533 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.528271 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"]
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.568739 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-r2hs9"]
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.574489 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"]
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.605569 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"]
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.608799 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-p5jtm"]
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.667246 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b088be-f212-48ba-bf21-f5aebbd8a655-serving-cert\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.667307 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-client-ca\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.667358 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-config\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.667386 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8tg\" (UniqueName: \"kubernetes.io/projected/21b088be-f212-48ba-bf21-f5aebbd8a655-kube-api-access-wb8tg\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.769138 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b088be-f212-48ba-bf21-f5aebbd8a655-serving-cert\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.769536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-client-ca\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.769576 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-config\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.769601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8tg\" (UniqueName: \"kubernetes.io/projected/21b088be-f212-48ba-bf21-f5aebbd8a655-kube-api-access-wb8tg\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.770616 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-client-ca\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.770864 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-config\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.774143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b088be-f212-48ba-bf21-f5aebbd8a655-serving-cert\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.794308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8tg\" (UniqueName: \"kubernetes.io/projected/21b088be-f212-48ba-bf21-f5aebbd8a655-kube-api-access-wb8tg\") pod \"route-controller-manager-5cfddd64bd-qbdd7\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:34 crc kubenswrapper[4823]: I1206 06:30:34.826841 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.005727 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"]
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.147744 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f960ca-da18-4dc3-804a-39e2fbb65830" path="/var/lib/kubelet/pods/15f960ca-da18-4dc3-804a-39e2fbb65830/volumes"
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.148372 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17169b08-cdf2-4035-9b1f-368827514331" path="/var/lib/kubelet/pods/17169b08-cdf2-4035-9b1f-368827514331/volumes"
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.480705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" event={"ID":"21b088be-f212-48ba-bf21-f5aebbd8a655","Type":"ContainerStarted","Data":"5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef"}
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.480765 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" event={"ID":"21b088be-f212-48ba-bf21-f5aebbd8a655","Type":"ContainerStarted","Data":"c7e44e861e8e220315000528de7e9d7c8f2c446a28a9f46637acec06ee0efd50"}
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.481162 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.499181 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" podStartSLOduration=3.499159193 podStartE2EDuration="3.499159193s" podCreationTimestamp="2025-12-06 06:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:30:35.496783202 +0000 UTC m=+336.782535172" watchObservedRunningTime="2025-12-06 06:30:35.499159193 +0000 UTC m=+336.784911163"
Dec 06 06:30:35 crc kubenswrapper[4823]: I1206 06:30:35.574930 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.323458 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"]
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.324427 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.326699 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.327191 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.327183 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.327607 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.327698 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.327743 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.333560 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.335284 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"]
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.500550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-config\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.500597 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2rg\" (UniqueName: \"kubernetes.io/projected/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-kube-api-access-bv2rg\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.500679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-proxy-ca-bundles\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.500721 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-serving-cert\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.500751 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-client-ca\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.601358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-serving-cert\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.601831 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-client-ca\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.601991 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-config\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.602083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv2rg\" (UniqueName: \"kubernetes.io/projected/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-kube-api-access-bv2rg\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.602184 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-proxy-ca-bundles\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.602656 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-client-ca\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.603174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-proxy-ca-bundles\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.603361 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-config\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.608954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-serving-cert\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.620456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv2rg\" (UniqueName: \"kubernetes.io/projected/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-kube-api-access-bv2rg\") pod \"controller-manager-66fdc6dc6c-vzzml\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.644013 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:37 crc kubenswrapper[4823]: I1206 06:30:37.897284 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"]
Dec 06 06:30:38 crc kubenswrapper[4823]: I1206 06:30:38.499061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" event={"ID":"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9","Type":"ContainerStarted","Data":"be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54"}
Dec 06 06:30:38 crc kubenswrapper[4823]: I1206 06:30:38.499395 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" event={"ID":"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9","Type":"ContainerStarted","Data":"5652952ea0c2560d4e6b8f727706012b8f38649c48d25c74153ebef054df26fc"}
Dec 06 06:30:38 crc kubenswrapper[4823]: I1206 06:30:38.499412 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:38 crc kubenswrapper[4823]: I1206 06:30:38.504636 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"
Dec 06 06:30:38 crc kubenswrapper[4823]: I1206 06:30:38.524401 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" podStartSLOduration=6.524379344 podStartE2EDuration="6.524379344s" podCreationTimestamp="2025-12-06 06:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:30:38.520380886 +0000 UTC m=+339.806132846" watchObservedRunningTime="2025-12-06 06:30:38.524379344 +0000 UTC m=+339.810131324"
Dec 06 06:31:05 crc kubenswrapper[4823]: I1206 06:31:05.901587 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qnrgf"]
Dec 06 06:31:05 crc kubenswrapper[4823]: I1206 06:31:05.902761 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:05 crc kubenswrapper[4823]: I1206 06:31:05.938335 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qnrgf"]
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052208 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052288 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052407 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50a10b62-cdb2-41d0-8481-64bd368d50d9-registry-certificates\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052453 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50a10b62-cdb2-41d0-8481-64bd368d50d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052487 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-registry-tls\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50a10b62-cdb2-41d0-8481-64bd368d50d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052571 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j68tj\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-kube-api-access-j68tj\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-bound-sa-token\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052832 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50a10b62-cdb2-41d0-8481-64bd368d50d9-trusted-ca\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.052871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.086252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154253 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-registry-tls\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154325 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50a10b62-cdb2-41d0-8481-64bd368d50d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154362 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j68tj\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-kube-api-access-j68tj\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-bound-sa-token\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50a10b62-cdb2-41d0-8481-64bd368d50d9-trusted-ca\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50a10b62-cdb2-41d0-8481-64bd368d50d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.154538 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50a10b62-cdb2-41d0-8481-64bd368d50d9-registry-certificates\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.155433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50a10b62-cdb2-41d0-8481-64bd368d50d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.155980 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50a10b62-cdb2-41d0-8481-64bd368d50d9-trusted-ca\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.156004 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50a10b62-cdb2-41d0-8481-64bd368d50d9-registry-certificates\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.160589 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50a10b62-cdb2-41d0-8481-64bd368d50d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.160603 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-registry-tls\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.172497 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j68tj\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-kube-api-access-j68tj\") pod \"image-registry-66df7c8f76-qnrgf\" (UID: \"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf"
Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.175503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50a10b62-cdb2-41d0-8481-64bd368d50d9-bound-sa-token\") pod \"image-registry-66df7c8f76-qnrgf\" (UID:
\"50a10b62-cdb2-41d0-8481-64bd368d50d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.223365 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" Dec 06 06:31:06 crc kubenswrapper[4823]: I1206 06:31:06.642538 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qnrgf"] Dec 06 06:31:07 crc kubenswrapper[4823]: I1206 06:31:07.654825 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" event={"ID":"50a10b62-cdb2-41d0-8481-64bd368d50d9","Type":"ContainerStarted","Data":"5f8f3f5fb5be60ba72db304c3e59be2bef94c4abd10b618c5b1323e635e92b7b"} Dec 06 06:31:07 crc kubenswrapper[4823]: I1206 06:31:07.654875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" event={"ID":"50a10b62-cdb2-41d0-8481-64bd368d50d9","Type":"ContainerStarted","Data":"2b305c270ff8fc489edfac15b695587590d93546aba99f4c9cf78fe9843174ba"} Dec 06 06:31:07 crc kubenswrapper[4823]: I1206 06:31:07.656075 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" Dec 06 06:31:07 crc kubenswrapper[4823]: I1206 06:31:07.677605 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" podStartSLOduration=2.67758443 podStartE2EDuration="2.67758443s" podCreationTimestamp="2025-12-06 06:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:31:07.674804368 +0000 UTC m=+368.960556338" watchObservedRunningTime="2025-12-06 06:31:07.67758443 +0000 UTC m=+368.963336390" Dec 06 06:31:11 crc kubenswrapper[4823]: I1206 06:31:11.582133 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"] Dec 06 06:31:11 crc kubenswrapper[4823]: I1206 06:31:11.583316 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" podUID="e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" containerName="controller-manager" containerID="cri-o://be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54" gracePeriod=30 Dec 06 06:31:11 crc kubenswrapper[4823]: I1206 06:31:11.602069 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"] Dec 06 06:31:11 crc kubenswrapper[4823]: I1206 06:31:11.602549 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" podUID="21b088be-f212-48ba-bf21-f5aebbd8a655" containerName="route-controller-manager" containerID="cri-o://5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef" gracePeriod=30 Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.104126 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.172177 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.235568 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b088be-f212-48ba-bf21-f5aebbd8a655-serving-cert\") pod \"21b088be-f212-48ba-bf21-f5aebbd8a655\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.235624 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-config\") pod \"21b088be-f212-48ba-bf21-f5aebbd8a655\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.235723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb8tg\" (UniqueName: \"kubernetes.io/projected/21b088be-f212-48ba-bf21-f5aebbd8a655-kube-api-access-wb8tg\") pod \"21b088be-f212-48ba-bf21-f5aebbd8a655\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.235794 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-client-ca\") pod \"21b088be-f212-48ba-bf21-f5aebbd8a655\" (UID: \"21b088be-f212-48ba-bf21-f5aebbd8a655\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.237150 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-config" (OuterVolumeSpecName: "config") pod "21b088be-f212-48ba-bf21-f5aebbd8a655" (UID: "21b088be-f212-48ba-bf21-f5aebbd8a655"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.240219 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-client-ca" (OuterVolumeSpecName: "client-ca") pod "21b088be-f212-48ba-bf21-f5aebbd8a655" (UID: "21b088be-f212-48ba-bf21-f5aebbd8a655"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.243247 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b088be-f212-48ba-bf21-f5aebbd8a655-kube-api-access-wb8tg" (OuterVolumeSpecName: "kube-api-access-wb8tg") pod "21b088be-f212-48ba-bf21-f5aebbd8a655" (UID: "21b088be-f212-48ba-bf21-f5aebbd8a655"). InnerVolumeSpecName "kube-api-access-wb8tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.243526 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b088be-f212-48ba-bf21-f5aebbd8a655-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21b088be-f212-48ba-bf21-f5aebbd8a655" (UID: "21b088be-f212-48ba-bf21-f5aebbd8a655"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337188 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-config\") pod \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337284 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv2rg\" (UniqueName: \"kubernetes.io/projected/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-kube-api-access-bv2rg\") pod \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337342 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-serving-cert\") pod \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337378 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-proxy-ca-bundles\") pod \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337396 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-client-ca\") pod \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\" (UID: \"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9\") " Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337698 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b088be-f212-48ba-bf21-f5aebbd8a655-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337720 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337733 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb8tg\" (UniqueName: \"kubernetes.io/projected/21b088be-f212-48ba-bf21-f5aebbd8a655-kube-api-access-wb8tg\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.337747 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b088be-f212-48ba-bf21-f5aebbd8a655-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.338427 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-client-ca" (OuterVolumeSpecName: "client-ca") pod "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" (UID: "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.338538 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-config" (OuterVolumeSpecName: "config") pod "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" (UID: "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.338563 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" (UID: "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.340378 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" (UID: "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.340416 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-kube-api-access-bv2rg" (OuterVolumeSpecName: "kube-api-access-bv2rg") pod "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" (UID: "e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9"). InnerVolumeSpecName "kube-api-access-bv2rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.432309 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xhxkq"] Dec 06 06:31:12 crc kubenswrapper[4823]: E1206 06:31:12.432583 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b088be-f212-48ba-bf21-f5aebbd8a655" containerName="route-controller-manager" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.432599 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b088be-f212-48ba-bf21-f5aebbd8a655" containerName="route-controller-manager" Dec 06 06:31:12 crc kubenswrapper[4823]: E1206 06:31:12.432613 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" containerName="controller-manager" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.432621 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" containerName="controller-manager" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.432751 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" containerName="controller-manager" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.432769 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b088be-f212-48ba-bf21-f5aebbd8a655" containerName="route-controller-manager" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.433606 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.435434 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.439306 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.439344 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.439358 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.439372 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv2rg\" (UniqueName: \"kubernetes.io/projected/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-kube-api-access-bv2rg\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.439386 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.443805 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhxkq"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.540840 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96wb\" (UniqueName: \"kubernetes.io/projected/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-kube-api-access-m96wb\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.541180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-utilities\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.541223 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-catalog-content\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.634305 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzxl6"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.635244 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.637835 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.641923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96wb\" (UniqueName: \"kubernetes.io/projected/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-kube-api-access-m96wb\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.641969 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-utilities\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.642013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-catalog-content\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.642548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-catalog-content\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.642789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-utilities\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.654370 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzxl6"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.664787 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96wb\" (UniqueName: \"kubernetes.io/projected/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-kube-api-access-m96wb\") pod \"community-operators-xhxkq\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.679868 4823 generic.go:334] "Generic (PLEG): container finished" podID="21b088be-f212-48ba-bf21-f5aebbd8a655" containerID="5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef" exitCode=0 Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.679935 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" event={"ID":"21b088be-f212-48ba-bf21-f5aebbd8a655","Type":"ContainerDied","Data":"5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef"} Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.679964 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" event={"ID":"21b088be-f212-48ba-bf21-f5aebbd8a655","Type":"ContainerDied","Data":"c7e44e861e8e220315000528de7e9d7c8f2c446a28a9f46637acec06ee0efd50"} Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.679982 4823 scope.go:117] "RemoveContainer" containerID="5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.680085 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.683617 4823 generic.go:334] "Generic (PLEG): container finished" podID="e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" containerID="be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54" exitCode=0 Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.683655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" event={"ID":"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9","Type":"ContainerDied","Data":"be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54"} Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.683696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" event={"ID":"e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9","Type":"ContainerDied","Data":"5652952ea0c2560d4e6b8f727706012b8f38649c48d25c74153ebef054df26fc"} Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.683710 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.699475 4823 scope.go:117] "RemoveContainer" containerID="5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef" Dec 06 06:31:12 crc kubenswrapper[4823]: E1206 06:31:12.699965 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef\": container with ID starting with 5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef not found: ID does not exist" containerID="5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.700008 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef"} err="failed to get container status \"5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef\": rpc error: code = NotFound desc = could not find container \"5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef\": container with ID starting with 5c6bedd8a552b8c9c2b81496b3d3615800618111f3d1ff7a3c6e9e34d4b985ef not found: ID does not exist" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.700040 4823 scope.go:117] "RemoveContainer" containerID="be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.718473 4823 scope.go:117] "RemoveContainer" containerID="be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54" Dec 06 06:31:12 crc kubenswrapper[4823]: E1206 06:31:12.718910 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54\": container with ID starting with be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54 not found: ID does not exist" containerID="be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.718960 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54"} err="failed to get container status \"be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54\": rpc error: code = NotFound desc = could not find container \"be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54\": container with ID starting with be629ce6f8fec472986f684b6de0d6e0ab649f5ed6a3f72187e147131760af54 not found: ID does not exist" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.725556 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.728958 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66fdc6dc6c-vzzml"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.743724 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg5r5\" (UniqueName: \"kubernetes.io/projected/584a4234-6095-4bab-9af7-3ae474ac27e6-kube-api-access-hg5r5\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.743802 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584a4234-6095-4bab-9af7-3ae474ac27e6-utilities\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.743847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584a4234-6095-4bab-9af7-3ae474ac27e6-catalog-content\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.748687 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.752463 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.752826 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfddd64bd-qbdd7"] Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.845743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584a4234-6095-4bab-9af7-3ae474ac27e6-catalog-content\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.845848 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg5r5\" (UniqueName: \"kubernetes.io/projected/584a4234-6095-4bab-9af7-3ae474ac27e6-kube-api-access-hg5r5\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.845901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584a4234-6095-4bab-9af7-3ae474ac27e6-utilities\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.847190 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584a4234-6095-4bab-9af7-3ae474ac27e6-catalog-content\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.847729 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584a4234-6095-4bab-9af7-3ae474ac27e6-utilities\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.864996 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg5r5\" (UniqueName: \"kubernetes.io/projected/584a4234-6095-4bab-9af7-3ae474ac27e6-kube-api-access-hg5r5\") pod \"certified-operators-kzxl6\" (UID: \"584a4234-6095-4bab-9af7-3ae474ac27e6\") " pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:12 crc kubenswrapper[4823]: I1206 06:31:12.950332 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.151129 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b088be-f212-48ba-bf21-f5aebbd8a655" path="/var/lib/kubelet/pods/21b088be-f212-48ba-bf21-f5aebbd8a655/volumes" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.154796 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9" path="/var/lib/kubelet/pods/e66aaaf4-ada3-4853-b57d-a4bfc27bbaf9/volumes" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.157779 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhxkq"] Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.345885 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzxl6"] Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.366550 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-pbvxl"] Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.367318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.370429 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj"] Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.371185 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.371261 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.371927 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.372032 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.372166 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.372390 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.372433 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.372745 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.373991 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.374435 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.374529 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.374631 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.374872 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.379099 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj"] Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.382273 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-pbvxl"] Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.403548 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453049 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-proxy-ca-bundles\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5696f49-163d-4191-b918-bb4bb0b16862-serving-cert\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453125 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2n4\" (UniqueName: \"kubernetes.io/projected/d5696f49-163d-4191-b918-bb4bb0b16862-kube-api-access-wn2n4\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453157 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-config\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453194 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b823c3-6f88-4761-a36e-155a28ab2134-config\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453208 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7fs\" (UniqueName: \"kubernetes.io/projected/52b823c3-6f88-4761-a36e-155a28ab2134-kube-api-access-wl7fs\") pod 
\"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-client-ca\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.453978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52b823c3-6f88-4761-a36e-155a28ab2134-client-ca\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.454110 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52b823c3-6f88-4761-a36e-155a28ab2134-serving-cert\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555693 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5696f49-163d-4191-b918-bb4bb0b16862-serving-cert\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555755 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2n4\" (UniqueName: \"kubernetes.io/projected/d5696f49-163d-4191-b918-bb4bb0b16862-kube-api-access-wn2n4\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555786 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-config\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555830 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b823c3-6f88-4761-a36e-155a28ab2134-config\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555845 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7fs\" (UniqueName: \"kubernetes.io/projected/52b823c3-6f88-4761-a36e-155a28ab2134-kube-api-access-wl7fs\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: 
\"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-client-ca\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555892 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52b823c3-6f88-4761-a36e-155a28ab2134-client-ca\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555917 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52b823c3-6f88-4761-a36e-155a28ab2134-serving-cert\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.555947 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-proxy-ca-bundles\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.557196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-proxy-ca-bundles\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.557762 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-client-ca\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.557806 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5696f49-163d-4191-b918-bb4bb0b16862-config\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.558552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52b823c3-6f88-4761-a36e-155a28ab2134-client-ca\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.558998 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b823c3-6f88-4761-a36e-155a28ab2134-config\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.561383 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5696f49-163d-4191-b918-bb4bb0b16862-serving-cert\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.562234 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52b823c3-6f88-4761-a36e-155a28ab2134-serving-cert\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.573024 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7fs\" (UniqueName: \"kubernetes.io/projected/52b823c3-6f88-4761-a36e-155a28ab2134-kube-api-access-wl7fs\") pod \"route-controller-manager-58fcf58c46-kjgqj\" (UID: \"52b823c3-6f88-4761-a36e-155a28ab2134\") " pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.576779 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2n4\" (UniqueName: \"kubernetes.io/projected/d5696f49-163d-4191-b918-bb4bb0b16862-kube-api-access-wn2n4\") pod \"controller-manager-568d55fbb8-pbvxl\" (UID: \"d5696f49-163d-4191-b918-bb4bb0b16862\") " pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.702630 4823 generic.go:334] "Generic (PLEG): container finished" podID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerID="faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64" exitCode=0 Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.702746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhxkq" event={"ID":"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d","Type":"ContainerDied","Data":"faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64"} Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.702779 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhxkq" event={"ID":"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d","Type":"ContainerStarted","Data":"19fe9d7c2cfa41a9919d432a2216c5c5a8fe4de3187e79fd06e4bbf089025394"} Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.707583 4823 generic.go:334] "Generic (PLEG): container finished" podID="584a4234-6095-4bab-9af7-3ae474ac27e6" containerID="e1e8eb82ccfc75d9927fb34a0649de98b7ec7cae6d803b204f4615277508651d" exitCode=0 Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.707768 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzxl6" event={"ID":"584a4234-6095-4bab-9af7-3ae474ac27e6","Type":"ContainerDied","Data":"e1e8eb82ccfc75d9927fb34a0649de98b7ec7cae6d803b204f4615277508651d"} Dec 06 06:31:13 crc 
kubenswrapper[4823]: I1206 06:31:13.707806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzxl6" event={"ID":"584a4234-6095-4bab-9af7-3ae474ac27e6","Type":"ContainerStarted","Data":"90b76863c72eac8d214a11d29ae90340951b8d0230cf7149ff06b09ed3dd632e"} Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.719012 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:13 crc kubenswrapper[4823]: I1206 06:31:13.729440 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.132053 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568d55fbb8-pbvxl"] Dec 06 06:31:14 crc kubenswrapper[4823]: W1206 06:31:14.138738 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5696f49_163d_4191_b918_bb4bb0b16862.slice/crio-9fb0d16ac5cce9a7f815cd7ee1fb3e0eeee505959fb29851ddac4e0b5710ab46 WatchSource:0}: Error finding container 9fb0d16ac5cce9a7f815cd7ee1fb3e0eeee505959fb29851ddac4e0b5710ab46: Status 404 returned error can't find the container with id 9fb0d16ac5cce9a7f815cd7ee1fb3e0eeee505959fb29851ddac4e0b5710ab46 Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.185033 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj"] Dec 06 06:31:14 crc kubenswrapper[4823]: W1206 06:31:14.190987 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b823c3_6f88_4761_a36e_155a28ab2134.slice/crio-f2834386262db7e7a88b2092491e5a07dc7a82a9f4f4e243bf7d7e90d044918a WatchSource:0}: Error finding container f2834386262db7e7a88b2092491e5a07dc7a82a9f4f4e243bf7d7e90d044918a: Status 404 returned error can't find the container with id f2834386262db7e7a88b2092491e5a07dc7a82a9f4f4e243bf7d7e90d044918a Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.721761 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" event={"ID":"d5696f49-163d-4191-b918-bb4bb0b16862","Type":"ContainerStarted","Data":"ca5c826c37e55718a810cc624fdac042d115d2cc9edebaf2abc500a3cd42f017"} Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.722115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" event={"ID":"d5696f49-163d-4191-b918-bb4bb0b16862","Type":"ContainerStarted","Data":"9fb0d16ac5cce9a7f815cd7ee1fb3e0eeee505959fb29851ddac4e0b5710ab46"} Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.722139 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.730728 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.731574 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" 
event={"ID":"52b823c3-6f88-4761-a36e-155a28ab2134","Type":"ContainerStarted","Data":"26edd2324ae3d875bec2dd1f10e5ad6c87997067eae6c48afaa2e180aadf6db8"} Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.731607 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" event={"ID":"52b823c3-6f88-4761-a36e-155a28ab2134","Type":"ContainerStarted","Data":"f2834386262db7e7a88b2092491e5a07dc7a82a9f4f4e243bf7d7e90d044918a"} Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.732689 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.742125 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.754371 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-568d55fbb8-pbvxl" podStartSLOduration=3.75434813 podStartE2EDuration="3.75434813s" podCreationTimestamp="2025-12-06 06:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:31:14.748484316 +0000 UTC m=+376.034236276" watchObservedRunningTime="2025-12-06 06:31:14.75434813 +0000 UTC m=+376.040100090" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.784206 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58fcf58c46-kjgqj" podStartSLOduration=3.784185875 podStartE2EDuration="3.784185875s" podCreationTimestamp="2025-12-06 06:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:31:14.783446043 +0000 UTC m=+376.069198003" watchObservedRunningTime="2025-12-06 06:31:14.784185875 +0000 UTC m=+376.069937845" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.834157 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jfh2s"] Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.835571 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.843989 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.857298 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfh2s"] Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.975317 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e447c74-d5a1-433a-bbb8-faa526e58597-utilities\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.975444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk7jj\" (UniqueName: \"kubernetes.io/projected/3e447c74-d5a1-433a-bbb8-faa526e58597-kube-api-access-wk7jj\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:14 crc kubenswrapper[4823]: I1206 06:31:14.975471 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e447c74-d5a1-433a-bbb8-faa526e58597-catalog-content\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.030883 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v426c"] Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.032115 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.034069 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.050289 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v426c"] Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.077162 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e447c74-d5a1-433a-bbb8-faa526e58597-utilities\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.077268 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk7jj\" (UniqueName: \"kubernetes.io/projected/3e447c74-d5a1-433a-bbb8-faa526e58597-kube-api-access-wk7jj\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.077305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e447c74-d5a1-433a-bbb8-faa526e58597-catalog-content\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.077792 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e447c74-d5a1-433a-bbb8-faa526e58597-utilities\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.077816 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e447c74-d5a1-433a-bbb8-faa526e58597-catalog-content\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.100097 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk7jj\" (UniqueName: \"kubernetes.io/projected/3e447c74-d5a1-433a-bbb8-faa526e58597-kube-api-access-wk7jj\") pod \"redhat-marketplace-jfh2s\" (UID: \"3e447c74-d5a1-433a-bbb8-faa526e58597\") " pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.166615 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.179445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25daf46-c19d-4f30-b638-f1d1ffb22e99-catalog-content\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.179520 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzlj\" (UniqueName: \"kubernetes.io/projected/f25daf46-c19d-4f30-b638-f1d1ffb22e99-kube-api-access-snzlj\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.179810 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25daf46-c19d-4f30-b638-f1d1ffb22e99-utilities\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.286243 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25daf46-c19d-4f30-b638-f1d1ffb22e99-utilities\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.286783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25daf46-c19d-4f30-b638-f1d1ffb22e99-catalog-content\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.286832 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snzlj\" (UniqueName: \"kubernetes.io/projected/f25daf46-c19d-4f30-b638-f1d1ffb22e99-kube-api-access-snzlj\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.287227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25daf46-c19d-4f30-b638-f1d1ffb22e99-catalog-content\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.287822 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25daf46-c19d-4f30-b638-f1d1ffb22e99-utilities\") pod \"redhat-operators-v426c\" (UID: \"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.313903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snzlj\" (UniqueName: \"kubernetes.io/projected/f25daf46-c19d-4f30-b638-f1d1ffb22e99-kube-api-access-snzlj\") pod \"redhat-operators-v426c\" (UID: 
\"f25daf46-c19d-4f30-b638-f1d1ffb22e99\") " pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.348051 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.622246 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jfh2s"] Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.742462 4823 generic.go:334] "Generic (PLEG): container finished" podID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerID="cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f" exitCode=0 Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.742573 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhxkq" event={"ID":"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d","Type":"ContainerDied","Data":"cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f"} Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.749582 4823 generic.go:334] "Generic (PLEG): container finished" podID="584a4234-6095-4bab-9af7-3ae474ac27e6" containerID="e20047ecd0c3177eb6ee7c8d970b653f9eb7ae349ecf8f886eccfb0f7e2ba773" exitCode=0 Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.749696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzxl6" event={"ID":"584a4234-6095-4bab-9af7-3ae474ac27e6","Type":"ContainerDied","Data":"e20047ecd0c3177eb6ee7c8d970b653f9eb7ae349ecf8f886eccfb0f7e2ba773"} Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.752587 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfh2s" event={"ID":"3e447c74-d5a1-433a-bbb8-faa526e58597","Type":"ContainerStarted","Data":"0bdeeea11fb80753754a10ec1ec9847ad6d40e89ea82b73ce64f433e44bffebf"} Dec 06 06:31:15 crc kubenswrapper[4823]: I1206 06:31:15.783358 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v426c"] Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.765558 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhxkq" event={"ID":"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d","Type":"ContainerStarted","Data":"c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1"} Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.770647 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzxl6" event={"ID":"584a4234-6095-4bab-9af7-3ae474ac27e6","Type":"ContainerStarted","Data":"0f63401b52ad9af3a8b3876a717633de373809ced8c0ef1323d82f508d9bdd9e"} Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.772767 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e447c74-d5a1-433a-bbb8-faa526e58597" containerID="4b75b27c7706a3a7fa2fe39ca05d98bb4691fb1504949bec335e7f615010e304" exitCode=0 Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.772902 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfh2s" event={"ID":"3e447c74-d5a1-433a-bbb8-faa526e58597","Type":"ContainerDied","Data":"4b75b27c7706a3a7fa2fe39ca05d98bb4691fb1504949bec335e7f615010e304"} Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.774546 4823 generic.go:334] "Generic (PLEG): container finished" podID="f25daf46-c19d-4f30-b638-f1d1ffb22e99" 
containerID="497142b553055186c731e8b1c142ff3962366f6ec9cd347398f62ae42b375148" exitCode=0 Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.775526 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v426c" event={"ID":"f25daf46-c19d-4f30-b638-f1d1ffb22e99","Type":"ContainerDied","Data":"497142b553055186c731e8b1c142ff3962366f6ec9cd347398f62ae42b375148"} Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.775565 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v426c" event={"ID":"f25daf46-c19d-4f30-b638-f1d1ffb22e99","Type":"ContainerStarted","Data":"e74082c79c7a4852ab2d0cbc82c9e232daf2844972825d11597d6fd1c3e17f20"} Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.794463 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xhxkq" podStartSLOduration=2.320061778 podStartE2EDuration="4.794437736s" podCreationTimestamp="2025-12-06 06:31:12 +0000 UTC" firstStartedPulling="2025-12-06 06:31:13.704610365 +0000 UTC m=+374.990362325" lastFinishedPulling="2025-12-06 06:31:16.178986323 +0000 UTC m=+377.464738283" observedRunningTime="2025-12-06 06:31:16.789206401 +0000 UTC m=+378.074958361" watchObservedRunningTime="2025-12-06 06:31:16.794437736 +0000 UTC m=+378.080189696" Dec 06 06:31:16 crc kubenswrapper[4823]: I1206 06:31:16.826855 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kzxl6" podStartSLOduration=2.37606603 podStartE2EDuration="4.826839297s" podCreationTimestamp="2025-12-06 06:31:12 +0000 UTC" firstStartedPulling="2025-12-06 06:31:13.709336466 +0000 UTC m=+374.995088426" lastFinishedPulling="2025-12-06 06:31:16.160109733 +0000 UTC m=+377.445861693" observedRunningTime="2025-12-06 06:31:16.824381374 +0000 UTC m=+378.110133354" watchObservedRunningTime="2025-12-06 06:31:16.826839297 +0000 UTC m=+378.112591257" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.753001 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.753371 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.796419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.806604 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v426c" event={"ID":"f25daf46-c19d-4f30-b638-f1d1ffb22e99","Type":"ContainerStarted","Data":"346c80038413569d12e939c0f9ac5f07fec070bdd187ad53dd1bc20656f3472a"} Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.808294 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e447c74-d5a1-433a-bbb8-faa526e58597" containerID="49e780f79b971e093892fd9f4c23172fd36551852ae4afc20956e5a618ebac85" exitCode=0 Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.808367 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfh2s" event={"ID":"3e447c74-d5a1-433a-bbb8-faa526e58597","Type":"ContainerDied","Data":"49e780f79b971e093892fd9f4c23172fd36551852ae4afc20956e5a618ebac85"} Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.850409 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.951514 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.952493 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:22 crc kubenswrapper[4823]: I1206 06:31:22.999601 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:23 crc kubenswrapper[4823]: I1206 06:31:23.815589 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jfh2s" event={"ID":"3e447c74-d5a1-433a-bbb8-faa526e58597","Type":"ContainerStarted","Data":"9f8cb0b08438b28fd8137cbfdadfdb9b678aa177a0e1fa928008f6213649dfed"} Dec 06 06:31:23 crc kubenswrapper[4823]: I1206 06:31:23.818426 4823 generic.go:334] "Generic (PLEG): container finished" podID="f25daf46-c19d-4f30-b638-f1d1ffb22e99" containerID="346c80038413569d12e939c0f9ac5f07fec070bdd187ad53dd1bc20656f3472a" exitCode=0 Dec 06 06:31:23 crc kubenswrapper[4823]: I1206 06:31:23.819023 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v426c" event={"ID":"f25daf46-c19d-4f30-b638-f1d1ffb22e99","Type":"ContainerDied","Data":"346c80038413569d12e939c0f9ac5f07fec070bdd187ad53dd1bc20656f3472a"} Dec 06 06:31:23 crc kubenswrapper[4823]: I1206 06:31:23.834609 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jfh2s" podStartSLOduration=3.320012423 podStartE2EDuration="9.834596239s" podCreationTimestamp="2025-12-06 06:31:14 +0000 UTC" firstStartedPulling="2025-12-06 06:31:16.775295378 +0000 UTC m=+378.061047338" lastFinishedPulling="2025-12-06 06:31:23.289879194 +0000 UTC m=+384.575631154" observedRunningTime="2025-12-06 06:31:23.831584169 +0000 UTC m=+385.117336129" watchObservedRunningTime="2025-12-06 06:31:23.834596239 +0000 UTC m=+385.120348189" Dec 06 06:31:23 crc kubenswrapper[4823]: I1206 06:31:23.870903 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzxl6" Dec 06 06:31:25 crc kubenswrapper[4823]: I1206 06:31:25.167062 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:25 crc kubenswrapper[4823]: I1206 06:31:25.167120 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:25 crc kubenswrapper[4823]: I1206 06:31:25.208058 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:25 crc kubenswrapper[4823]: I1206 06:31:25.830635 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v426c" event={"ID":"f25daf46-c19d-4f30-b638-f1d1ffb22e99","Type":"ContainerStarted","Data":"c4e1adb714730356ddc2129cfca37f9860b091b155ea9b3c5e7d1f2e598d8ca0"} Dec 06 06:31:25 crc kubenswrapper[4823]: I1206 06:31:25.851738 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v426c" podStartSLOduration=3.2251842809999998 podStartE2EDuration="10.851720015s" 
podCreationTimestamp="2025-12-06 06:31:15 +0000 UTC" firstStartedPulling="2025-12-06 06:31:16.777115312 +0000 UTC m=+378.062867272" lastFinishedPulling="2025-12-06 06:31:24.403651046 +0000 UTC m=+385.689403006" observedRunningTime="2025-12-06 06:31:25.849106007 +0000 UTC m=+387.134857957" watchObservedRunningTime="2025-12-06 06:31:25.851720015 +0000 UTC m=+387.137471975" Dec 06 06:31:26 crc kubenswrapper[4823]: I1206 06:31:26.229737 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qnrgf" Dec 06 06:31:26 crc kubenswrapper[4823]: I1206 06:31:26.281059 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rb79w"] Dec 06 06:31:35 crc kubenswrapper[4823]: I1206 06:31:35.212638 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jfh2s" Dec 06 06:31:35 crc kubenswrapper[4823]: I1206 06:31:35.349006 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:35 crc kubenswrapper[4823]: I1206 06:31:35.349415 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:35 crc kubenswrapper[4823]: I1206 06:31:35.386768 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:35 crc kubenswrapper[4823]: I1206 06:31:35.919021 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v426c" Dec 06 06:31:36 crc kubenswrapper[4823]: I1206 06:31:36.051717 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:31:36 crc kubenswrapper[4823]: I1206 06:31:36.051776 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.318199 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" podUID="3f369975-6444-44d7-b85f-290ec604b172" containerName="registry" containerID="cri-o://7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928" gracePeriod=30 Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.722294 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796330 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7mh4\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-kube-api-access-f7mh4\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796388 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-registry-tls\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796447 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f369975-6444-44d7-b85f-290ec604b172-installation-pull-secrets\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796485 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-trusted-ca\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796558 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f369975-6444-44d7-b85f-290ec604b172-ca-trust-extracted\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-registry-certificates\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796627 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-bound-sa-token\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.796834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3f369975-6444-44d7-b85f-290ec604b172\" (UID: \"3f369975-6444-44d7-b85f-290ec604b172\") " Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.797554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.805126 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f369975-6444-44d7-b85f-290ec604b172-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.805204 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.805388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.805405 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-kube-api-access-f7mh4" (OuterVolumeSpecName: "kube-api-access-f7mh4") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "kube-api-access-f7mh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.805576 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.819291 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.822881 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f369975-6444-44d7-b85f-290ec604b172-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3f369975-6444-44d7-b85f-290ec604b172" (UID: "3f369975-6444-44d7-b85f-290ec604b172"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898573 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7mh4\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-kube-api-access-f7mh4\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898630 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898648 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f369975-6444-44d7-b85f-290ec604b172-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898704 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898718 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f369975-6444-44d7-b85f-290ec604b172-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898728 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f369975-6444-44d7-b85f-290ec604b172-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.898739 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f369975-6444-44d7-b85f-290ec604b172-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.978846 4823 generic.go:334] "Generic (PLEG): container finished" podID="3f369975-6444-44d7-b85f-290ec604b172" containerID="7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928" exitCode=0 Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.978900 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.978917 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" event={"ID":"3f369975-6444-44d7-b85f-290ec604b172","Type":"ContainerDied","Data":"7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928"} Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.979307 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rb79w" event={"ID":"3f369975-6444-44d7-b85f-290ec604b172","Type":"ContainerDied","Data":"78d8ade9f0610a386aafb1d9e8f8c70266a3e0c37a1f96824c29e2e367e040e8"} Dec 06 06:31:51 crc kubenswrapper[4823]: I1206 06:31:51.979326 4823 scope.go:117] "RemoveContainer" containerID="7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928" Dec 06 06:31:52 crc kubenswrapper[4823]: I1206 06:31:52.007505 4823 scope.go:117] "RemoveContainer" containerID="7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928" Dec 06 06:31:52 crc kubenswrapper[4823]: E1206 06:31:52.008177 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928\": container with ID starting with 7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928 not found: ID does not exist" containerID="7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928" Dec 06 06:31:52 crc kubenswrapper[4823]: I1206 06:31:52.008236 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928"} err="failed to get container status \"7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928\": rpc error: code = NotFound desc = could not find container \"7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928\": container with ID starting with 7d415b5dc1af211d8c3cc10228141e39374542e4eb39178f3cb67106aa09b928 not found: ID does not exist" Dec 06 06:31:52 crc kubenswrapper[4823]: I1206 06:31:52.012993 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rb79w"] Dec 06 06:31:52 crc kubenswrapper[4823]: I1206 06:31:52.025950 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rb79w"] Dec 06 06:31:53 crc kubenswrapper[4823]: I1206 06:31:53.148900 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f369975-6444-44d7-b85f-290ec604b172" path="/var/lib/kubelet/pods/3f369975-6444-44d7-b85f-290ec604b172/volumes" Dec 06 06:31:59 crc kubenswrapper[4823]: I1206 06:31:59.304101 4823 scope.go:117] "RemoveContainer" containerID="01d96e4e5f01747ae9a0c77f9ae4c8ecd4c71efe6d92c240ca8a0e2e5ffa4af0" Dec 06 06:31:59 crc kubenswrapper[4823]: I1206 06:31:59.320727 4823 scope.go:117] "RemoveContainer" containerID="7437e816695c4ed74050c6a0a13d327a73a1c0f1104188b9d6d2c6d7cdf55c0d" Dec 06 06:31:59 crc kubenswrapper[4823]: I1206 06:31:59.336272 4823 scope.go:117] "RemoveContainer" containerID="a7a52a082806b2572d1dc43001aa243da1b6f7716a4dde4cdd7d860ddeba7104" Dec 06 06:31:59 crc kubenswrapper[4823]: I1206 06:31:59.350061 4823 scope.go:117] "RemoveContainer" containerID="c66e8039eb565560b625225d44c1a56a3de3892977f428e78e2a7cd7de6a61d8" Dec 06 06:31:59 crc kubenswrapper[4823]: I1206 
06:31:59.360533 4823 scope.go:117] "RemoveContainer" containerID="8376ce9c057a3eed102135715a6977b9452e9b2e13b06c20548584f9608f7f66" Dec 06 06:31:59 crc kubenswrapper[4823]: I1206 06:31:59.377966 4823 scope.go:117] "RemoveContainer" containerID="d0b2b4254eb4817df888029e2723397582348e2a5b9b0fa077c18a4903de04af" Dec 06 06:32:06 crc kubenswrapper[4823]: I1206 06:32:06.052373 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:32:06 crc kubenswrapper[4823]: I1206 06:32:06.053076 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:32:06 crc kubenswrapper[4823]: I1206 06:32:06.053134 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:32:06 crc kubenswrapper[4823]: I1206 06:32:06.053934 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ccae4427dbcfd162a392c0f60c728a29ff44263d7626709954156668dc178c3"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:32:06 crc kubenswrapper[4823]: I1206 06:32:06.054023 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://3ccae4427dbcfd162a392c0f60c728a29ff44263d7626709954156668dc178c3" gracePeriod=600 Dec 06 06:32:07 crc kubenswrapper[4823]: I1206 06:32:07.070922 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="3ccae4427dbcfd162a392c0f60c728a29ff44263d7626709954156668dc178c3" exitCode=0 Dec 06 06:32:07 crc kubenswrapper[4823]: I1206 06:32:07.071016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"3ccae4427dbcfd162a392c0f60c728a29ff44263d7626709954156668dc178c3"} Dec 06 06:32:07 crc kubenswrapper[4823]: I1206 06:32:07.072306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"6548b8fc0740e8ab287d9661d80c7359a1550d44b0eb944832e91861fa69169a"} Dec 06 06:32:07 crc kubenswrapper[4823]: I1206 06:32:07.072385 4823 scope.go:117] "RemoveContainer" containerID="4e08566862e96572f68503de043e9cde31a3442a007512e19da8dc47189d427b" Dec 06 06:34:06 crc kubenswrapper[4823]: I1206 06:34:06.052282 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
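The repeated "Liveness probe status=failure ... connection refused" entries above trace the kubelet's HTTP liveness-probe cycle: the kubelet issues a GET to http://127.0.0.1:8798/health, nothing is listening on that port, so the TCP dial is refused and the attempt is recorded as a probe failure. After enough consecutive failures the kubelet marks the container unhealthy ("Container machine-config-daemon failed liveness probe, will be restarted") and kills it with the configured grace period (gracePeriod=600 in the entries above) before starting a replacement. As a minimal sketch, assuming a plain net/http server (a hypothetical illustration, not the machine-config-daemon's actual code), this is the kind of endpoint such a probe expects; any 2xx response counts as success:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Address and path taken from the probe URL in the log above.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		// A 200 status is all the kubelet's HTTP liveness probe needs.
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}

When no process is bound to 127.0.0.1:8798 at probe time, the dial fails before any HTTP exchange takes place, which is why the probe output reads "connect: connection refused" rather than reporting a non-2xx status code.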
Dec 06 06:34:06 crc kubenswrapper[4823]: I1206 06:34:06.052893 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:34:36 crc kubenswrapper[4823]: I1206 06:34:36.051783 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:34:36 crc kubenswrapper[4823]: I1206 06:34:36.052373 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:35:06 crc kubenswrapper[4823]: I1206 06:35:06.052405 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:35:06 crc kubenswrapper[4823]: I1206 06:35:06.053097 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:35:06 crc kubenswrapper[4823]: I1206 06:35:06.053149 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:35:06 crc kubenswrapper[4823]: I1206 06:35:06.053764 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6548b8fc0740e8ab287d9661d80c7359a1550d44b0eb944832e91861fa69169a"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:35:06 crc kubenswrapper[4823]: I1206 06:35:06.053811 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://6548b8fc0740e8ab287d9661d80c7359a1550d44b0eb944832e91861fa69169a" gracePeriod=600 Dec 06 06:35:07 crc kubenswrapper[4823]: I1206 06:35:07.106539 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="6548b8fc0740e8ab287d9661d80c7359a1550d44b0eb944832e91861fa69169a" exitCode=0 Dec 06 06:35:07 crc kubenswrapper[4823]: I1206 06:35:07.106611 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"6548b8fc0740e8ab287d9661d80c7359a1550d44b0eb944832e91861fa69169a"} Dec 06 06:35:07 crc kubenswrapper[4823]: 
I1206 06:35:07.106894 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"5eadb100f9de392020e8ad9c0c80f79bb4ee89b08a0b99aaf32660b2052224b2"} Dec 06 06:35:07 crc kubenswrapper[4823]: I1206 06:35:07.106914 4823 scope.go:117] "RemoveContainer" containerID="3ccae4427dbcfd162a392c0f60c728a29ff44263d7626709954156668dc178c3" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.972199 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sg6d6"] Dec 06 06:35:54 crc kubenswrapper[4823]: E1206 06:35:54.972850 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f369975-6444-44d7-b85f-290ec604b172" containerName="registry" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.972862 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f369975-6444-44d7-b85f-290ec604b172" containerName="registry" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.972952 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f369975-6444-44d7-b85f-290ec604b172" containerName="registry" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.973316 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.974932 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zx7w7" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.975629 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.975770 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.981024 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sg6d6"] Dec 06 06:35:54 crc kubenswrapper[4823]: I1206 06:35:54.992774 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchr9\" (UniqueName: \"kubernetes.io/projected/8633755c-f571-4f49-bb10-a2ce86967ce6-kube-api-access-mchr9\") pod \"cert-manager-cainjector-7f985d654d-sg6d6\" (UID: \"8633755c-f571-4f49-bb10-a2ce86967ce6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.000320 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"] Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.001050 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.004419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rxsv8" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.008970 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-gzrvf"] Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.009581 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-gzrvf" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.015258 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xqkpd" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.020141 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"] Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.024733 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-gzrvf"] Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.093624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchr9\" (UniqueName: \"kubernetes.io/projected/8633755c-f571-4f49-bb10-a2ce86967ce6-kube-api-access-mchr9\") pod \"cert-manager-cainjector-7f985d654d-sg6d6\" (UID: \"8633755c-f571-4f49-bb10-a2ce86967ce6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.093704 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpkv\" (UniqueName: \"kubernetes.io/projected/16f9d7d8-2452-47c9-ad9a-468a067e74bc-kube-api-access-rxpkv\") pod \"cert-manager-5b446d88c5-gzrvf\" (UID: \"16f9d7d8-2452-47c9-ad9a-468a067e74bc\") " pod="cert-manager/cert-manager-5b446d88c5-gzrvf" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.093750 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wmj\" (UniqueName: \"kubernetes.io/projected/7e6a87fb-3fc3-426b-b8fc-3bec076c5544-kube-api-access-h5wmj\") pod \"cert-manager-webhook-5655c58dd6-l8zkg\" (UID: \"7e6a87fb-3fc3-426b-b8fc-3bec076c5544\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.113891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchr9\" (UniqueName: \"kubernetes.io/projected/8633755c-f571-4f49-bb10-a2ce86967ce6-kube-api-access-mchr9\") pod \"cert-manager-cainjector-7f985d654d-sg6d6\" (UID: \"8633755c-f571-4f49-bb10-a2ce86967ce6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.194916 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxpkv\" (UniqueName: \"kubernetes.io/projected/16f9d7d8-2452-47c9-ad9a-468a067e74bc-kube-api-access-rxpkv\") pod \"cert-manager-5b446d88c5-gzrvf\" (UID: \"16f9d7d8-2452-47c9-ad9a-468a067e74bc\") " pod="cert-manager/cert-manager-5b446d88c5-gzrvf" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.195050 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wmj\" (UniqueName: \"kubernetes.io/projected/7e6a87fb-3fc3-426b-b8fc-3bec076c5544-kube-api-access-h5wmj\") pod \"cert-manager-webhook-5655c58dd6-l8zkg\" (UID: \"7e6a87fb-3fc3-426b-b8fc-3bec076c5544\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg" Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.212365 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxpkv\" (UniqueName: \"kubernetes.io/projected/16f9d7d8-2452-47c9-ad9a-468a067e74bc-kube-api-access-rxpkv\") pod \"cert-manager-5b446d88c5-gzrvf\" (UID: \"16f9d7d8-2452-47c9-ad9a-468a067e74bc\") " pod="cert-manager/cert-manager-5b446d88c5-gzrvf" Dec 06 
06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.212462 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wmj\" (UniqueName: \"kubernetes.io/projected/7e6a87fb-3fc3-426b-b8fc-3bec076c5544-kube-api-access-h5wmj\") pod \"cert-manager-webhook-5655c58dd6-l8zkg\" (UID: \"7e6a87fb-3fc3-426b-b8fc-3bec076c5544\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.292040 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6"
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.327368 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.337117 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-gzrvf"
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.605597 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"]
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.613922 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.623861 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-gzrvf"]
Dec 06 06:35:55 crc kubenswrapper[4823]: W1206 06:35:55.629579 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f9d7d8_2452_47c9_ad9a_468a067e74bc.slice/crio-58f1739c0674290586058ad9b089752c503215d9168c0308514fe1cc7cb5ead0 WatchSource:0}: Error finding container 58f1739c0674290586058ad9b089752c503215d9168c0308514fe1cc7cb5ead0: Status 404 returned error can't find the container with id 58f1739c0674290586058ad9b089752c503215d9168c0308514fe1cc7cb5ead0
Dec 06 06:35:55 crc kubenswrapper[4823]: I1206 06:35:55.733999 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sg6d6"]
Dec 06 06:35:55 crc kubenswrapper[4823]: W1206 06:35:55.742167 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8633755c_f571_4f49_bb10_a2ce86967ce6.slice/crio-3ec1628f9cf27822602376591d1242315bdb0371b806333e5803db5a2d256f55 WatchSource:0}: Error finding container 3ec1628f9cf27822602376591d1242315bdb0371b806333e5803db5a2d256f55: Status 404 returned error can't find the container with id 3ec1628f9cf27822602376591d1242315bdb0371b806333e5803db5a2d256f55
Dec 06 06:35:56 crc kubenswrapper[4823]: I1206 06:35:56.563437 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-gzrvf" event={"ID":"16f9d7d8-2452-47c9-ad9a-468a067e74bc","Type":"ContainerStarted","Data":"58f1739c0674290586058ad9b089752c503215d9168c0308514fe1cc7cb5ead0"}
Dec 06 06:35:56 crc kubenswrapper[4823]: I1206 06:35:56.577721 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" event={"ID":"8633755c-f571-4f49-bb10-a2ce86967ce6","Type":"ContainerStarted","Data":"3ec1628f9cf27822602376591d1242315bdb0371b806333e5803db5a2d256f55"}
Dec 06 06:35:56 crc kubenswrapper[4823]: I1206 06:35:56.583146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg" event={"ID":"7e6a87fb-3fc3-426b-b8fc-3bec076c5544","Type":"ContainerStarted","Data":"4538308edff7b7ac584ac7f849df93ca98f1b0a5ee0b8215b134511f6519c74d"}
Dec 06 06:35:58 crc kubenswrapper[4823]: I1206 06:35:58.597132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg" event={"ID":"7e6a87fb-3fc3-426b-b8fc-3bec076c5544","Type":"ContainerStarted","Data":"1930c33e02ebb0c271145794c3f136a3ad11ae9c60b11adbbba0794cfd944680"}
Dec 06 06:35:58 crc kubenswrapper[4823]: I1206 06:35:58.598379 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"
Dec 06 06:35:58 crc kubenswrapper[4823]: I1206 06:35:58.599883 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-gzrvf" event={"ID":"16f9d7d8-2452-47c9-ad9a-468a067e74bc","Type":"ContainerStarted","Data":"611aaf4df7bbbca3b94f081c2719c426aa613213d075c06eaf8fdc7cfa85f92e"}
Dec 06 06:35:58 crc kubenswrapper[4823]: I1206 06:35:58.637595 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg" podStartSLOduration=2.062645055 podStartE2EDuration="4.637572557s" podCreationTimestamp="2025-12-06 06:35:54 +0000 UTC" firstStartedPulling="2025-12-06 06:35:55.613717763 +0000 UTC m=+656.899469713" lastFinishedPulling="2025-12-06 06:35:58.188645255 +0000 UTC m=+659.474397215" observedRunningTime="2025-12-06 06:35:58.616405984 +0000 UTC m=+659.902157944" watchObservedRunningTime="2025-12-06 06:35:58.637572557 +0000 UTC m=+659.923324537"
Dec 06 06:35:59 crc kubenswrapper[4823]: I1206 06:35:59.160881 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-gzrvf" podStartSLOduration=2.548209447 podStartE2EDuration="5.160847462s" podCreationTimestamp="2025-12-06 06:35:54 +0000 UTC" firstStartedPulling="2025-12-06 06:35:55.631404305 +0000 UTC m=+656.917156265" lastFinishedPulling="2025-12-06 06:35:58.24404232 +0000 UTC m=+659.529794280" observedRunningTime="2025-12-06 06:35:58.639698518 +0000 UTC m=+659.925450498" watchObservedRunningTime="2025-12-06 06:35:59.160847462 +0000 UTC m=+660.446599422"
Dec 06 06:35:59 crc kubenswrapper[4823]: I1206 06:35:59.606597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" event={"ID":"8633755c-f571-4f49-bb10-a2ce86967ce6","Type":"ContainerStarted","Data":"a4b597a02e56d401d85f66a2349677134a25191f2ca307be504e1ceab95e06c4"}
Dec 06 06:35:59 crc kubenswrapper[4823]: I1206 06:35:59.623571 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-sg6d6" podStartSLOduration=2.322011004 podStartE2EDuration="5.623551801s" podCreationTimestamp="2025-12-06 06:35:54 +0000 UTC" firstStartedPulling="2025-12-06 06:35:55.745408156 +0000 UTC m=+657.031160116" lastFinishedPulling="2025-12-06 06:35:59.046948953 +0000 UTC m=+660.332700913" observedRunningTime="2025-12-06 06:35:59.619900006 +0000 UTC m=+660.905651966" watchObservedRunningTime="2025-12-06 06:35:59.623551801 +0000 UTC m=+660.909303761"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091074 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rr4m5"]
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091782 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-controller" containerID="cri-o://43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091865 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="northd" containerID="cri-o://8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091915 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-node" containerID="cri-o://486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091904 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091929 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-acl-logging" containerID="cri-o://772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091978 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="sbdb" containerID="cri-o://30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.091830 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="nbdb" containerID="cri-o://934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.131535 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller" containerID="cri-o://9961728a73d27d7249b1c1628309f8bdf627d8fc1d08120ec5b900a351b6ff9c" gracePeriod=30
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.329938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8zkg"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.640091 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/2.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.640637 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/1.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.640762 4823 generic.go:334] "Generic (PLEG): container finished" podID="e2faf943-388e-4105-a30d-b0bbb041f8e0" containerID="38337c6d04bc6b2fa4ecc741d7ce7660c69d8d9d203cc577034850b4c54a80af" exitCode=2
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.640840 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerDied","Data":"38337c6d04bc6b2fa4ecc741d7ce7660c69d8d9d203cc577034850b4c54a80af"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.640888 4823 scope.go:117] "RemoveContainer" containerID="31fc1a3302d6dbc392cfb5425747a5c31475388f6af4c498ecc75f33ce7740b2"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.641453 4823 scope.go:117] "RemoveContainer" containerID="38337c6d04bc6b2fa4ecc741d7ce7660c69d8d9d203cc577034850b4c54a80af"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.641692 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bldh8_openshift-multus(e2faf943-388e-4105-a30d-b0bbb041f8e0)\"" pod="openshift-multus/multus-bldh8" podUID="e2faf943-388e-4105-a30d-b0bbb041f8e0"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.647585 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovnkube-controller/3.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.650390 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovn-acl-logging/0.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.650920 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovn-controller/0.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653412 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="9961728a73d27d7249b1c1628309f8bdf627d8fc1d08120ec5b900a351b6ff9c" exitCode=0
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653464 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1" exitCode=0
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653475 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc" exitCode=0
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653485 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1" exitCode=0
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653494 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49" exitCode=0
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653500 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f" exitCode=0
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653508 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104" exitCode=143
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653517 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerID="43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb" exitCode=143
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653548 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"9961728a73d27d7249b1c1628309f8bdf627d8fc1d08120ec5b900a351b6ff9c"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653580 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653590 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653599 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653611 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653633 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.653644 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb"}
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.692094 4823 scope.go:117] "RemoveContainer" containerID="40de68b30aaf6ce3782c5d327a806ff7e1645ac533fc11832388b550a9fd3726"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.930686 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovn-acl-logging/0.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.931272 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovn-controller/0.log"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.931748 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984439 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6znmn"]
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984647 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984676 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984685 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kubecfg-setup"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984692 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kubecfg-setup"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984700 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="sbdb"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984706 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="sbdb"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984718 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984725 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984733 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="northd"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984738 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="northd"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984744 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984764 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984772 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984778 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984787 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="nbdb"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984793 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="nbdb"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984803 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-node"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984808 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-node"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984815 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-acl-logging"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984821 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-acl-logging"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984828 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-ovn-metrics"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984833 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-ovn-metrics"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.984840 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.984846 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985000 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985008 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985017 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovn-acl-logging"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985028 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="northd"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985034 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="sbdb"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985040 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="nbdb"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985047 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-ovn-metrics"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985055 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985060 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985067 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="kube-rbac-proxy-node"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985075 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: E1206 06:36:05.985162 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985170 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.985249 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" containerName="ovnkube-controller"
Dec 06 06:36:05 crc kubenswrapper[4823]: I1206 06:36:05.987394 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-run-ovn-kubernetes\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029122 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-cni-netd\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovnkube-script-lib\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029199 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-env-overrides\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029696 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-log-socket\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029737 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-slash\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029765 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-kubelet\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029788 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znt9l\" (UniqueName: \"kubernetes.io/projected/aa482e37-6e7c-4caa-9d7b-f860abf53aee-kube-api-access-znt9l\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.029822 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-cni-bin\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-run-netns\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030070 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-etc-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-systemd-units\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030164 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-var-lib-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030200 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-systemd\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030283 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovn-node-metrics-cert\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovnkube-config\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-node-log\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030379 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-ovn\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.030448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131182 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-ovn-kubernetes\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131244 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-netns\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131266 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-kubelet\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131289 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-systemd\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131327 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131311 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131414 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131315 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131346 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-systemd-units\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131450 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131503 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-etc-openvswitch\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131546 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-config\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131568 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-openvswitch\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131596 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-log-socket\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131605 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131634 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131636 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-env-overrides\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-var-lib-openvswitch\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbgp\" (UniqueName: \"kubernetes.io/projected/d7a8c395-bca0-48a5-bb35-10e956e85a2a-kube-api-access-qnbgp\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-log-socket" (OuterVolumeSpecName: "log-socket") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131850 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131831 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-netd\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131921 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-slash\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131928 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131971 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-ovn\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.131987 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-slash" (OuterVolumeSpecName: "host-slash") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132019 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-script-lib\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132059 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-node-log\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132078 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132089 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovn-node-metrics-cert\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132084 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132107 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-bin\") pod \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\" (UID: \"d7a8c395-bca0-48a5-bb35-10e956e85a2a\") "
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-node-log" (OuterVolumeSpecName: "node-log") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132143 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132463 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-systemd-units\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132481 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132519 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-var-lib-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-systemd-units\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132548 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-systemd\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132574 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-var-lib-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-systemd\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovn-node-metrics-cert\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132805 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovnkube-config\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-node-log\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132902 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-ovn\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132935 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132939 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-node-log\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.132992 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-run-ovn-kubernetes\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133032 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-run-ovn\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133044 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-cni-netd\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-run-ovn-kubernetes\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovnkube-script-lib\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133095 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-cni-netd\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133100 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-env-overrides\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133136 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-log-socket\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133163 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-slash\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133187 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-kubelet\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133211 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znt9l\" (UniqueName: \"kubernetes.io/projected/aa482e37-6e7c-4caa-9d7b-f860abf53aee-kube-api-access-znt9l\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133248 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-cni-bin\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-run-netns\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133299 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-etc-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133300 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-slash\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133357 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-cni-bin\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133384 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-kubelet\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133396 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-log-socket\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133442 4823 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133451 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-etc-openvswitch\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa482e37-6e7c-4caa-9d7b-f860abf53aee-host-run-netns\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133592 4823 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133608 4823 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133623 4823 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133636 4823 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133645 4823 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133655 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133677 4823 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133686 4823 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-log-socket\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133695 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133703 4823 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133716 4823 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133725 4823 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-slash\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133739 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-env-overrides\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133752 4823 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133773 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133786 4823 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-node-log\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133795 4823 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.134271 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovnkube-script-lib\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.133799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovnkube-config\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.137220 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa482e37-6e7c-4caa-9d7b-f860abf53aee-ovn-node-metrics-cert\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.137856 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.137912 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a8c395-bca0-48a5-bb35-10e956e85a2a-kube-api-access-qnbgp" (OuterVolumeSpecName: "kube-api-access-qnbgp") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "kube-api-access-qnbgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.145568 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d7a8c395-bca0-48a5-bb35-10e956e85a2a" (UID: "d7a8c395-bca0-48a5-bb35-10e956e85a2a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.150320 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znt9l\" (UniqueName: \"kubernetes.io/projected/aa482e37-6e7c-4caa-9d7b-f860abf53aee-kube-api-access-znt9l\") pod \"ovnkube-node-6znmn\" (UID: \"aa482e37-6e7c-4caa-9d7b-f860abf53aee\") " pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.234183 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7a8c395-bca0-48a5-bb35-10e956e85a2a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.234215 4823 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7a8c395-bca0-48a5-bb35-10e956e85a2a-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.234223 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbgp\" (UniqueName: \"kubernetes.io/projected/d7a8c395-bca0-48a5-bb35-10e956e85a2a-kube-api-access-qnbgp\") on node \"crc\" DevicePath \"\""
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.304405 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn"
Dec 06 06:36:06 crc kubenswrapper[4823]: E1206 06:36:06.553204 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa482e37_6e7c_4caa_9d7b_f860abf53aee.slice/crio-607016a50b72d84dfd44d68fa6fa48d9ea91b39a29bb8544a0383de7623a2720.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa482e37_6e7c_4caa_9d7b_f860abf53aee.slice/crio-conmon-607016a50b72d84dfd44d68fa6fa48d9ea91b39a29bb8544a0383de7623a2720.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.661628 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/2.log"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.664952 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovn-acl-logging/0.log"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.665413 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rr4m5_d7a8c395-bca0-48a5-bb35-10e956e85a2a/ovn-controller/0.log"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.665815 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" event={"ID":"d7a8c395-bca0-48a5-bb35-10e956e85a2a","Type":"ContainerDied","Data":"77bffb303293e67375fb94850372985bda20ea557ef14205104486a3fca9e076"}
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.665858 4823 scope.go:117] "RemoveContainer" containerID="9961728a73d27d7249b1c1628309f8bdf627d8fc1d08120ec5b900a351b6ff9c"
Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.665876 4823 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rr4m5" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.667169 4823 generic.go:334] "Generic (PLEG): container finished" podID="aa482e37-6e7c-4caa-9d7b-f860abf53aee" containerID="607016a50b72d84dfd44d68fa6fa48d9ea91b39a29bb8544a0383de7623a2720" exitCode=0 Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.667235 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerDied","Data":"607016a50b72d84dfd44d68fa6fa48d9ea91b39a29bb8544a0383de7623a2720"} Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.667249 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"a84c298edce36cb05bc691197779f6ee3b85d348d76c23b8364b21c4c28546c5"} Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.685577 4823 scope.go:117] "RemoveContainer" containerID="30f667f39dc297496a96b4e7485fcb6b8a259045dab120c027414eae9ffb30d1" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.711481 4823 scope.go:117] "RemoveContainer" containerID="934a696d5bd80607823c5b29ddf16aa1ad3fb10f51eabea0fdb69be3e8d77edc" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.725818 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rr4m5"] Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.739469 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rr4m5"] Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.755865 4823 scope.go:117] "RemoveContainer" containerID="8bba4efabafbc18b324a025d5f5f2be135b8d6914f4222831f0de9f060baa6b1" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.796867 4823 scope.go:117] "RemoveContainer" containerID="65aa5633652833e23cce170fc376f93d675455ba1183c86abccea1a1b3150c49" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.813034 4823 scope.go:117] "RemoveContainer" containerID="486870ed3923f5391c9f457f8ff8a3aa81044cdd3cd08b20d922811d8442243f" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.830052 4823 scope.go:117] "RemoveContainer" containerID="772ab216915a5d63335ebd2327178c9c1082f12d9d530eaa25652015cd0fa104" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.846051 4823 scope.go:117] "RemoveContainer" containerID="43de20584f5489d241743459f593bf1e883ac19da37046973e3a011cff9b0dcb" Dec 06 06:36:06 crc kubenswrapper[4823]: I1206 06:36:06.863510 4823 scope.go:117] "RemoveContainer" containerID="baf8a7e66810bc5cf3aeb7df60b13f35ab783791d28dfc59db5d66b8be61c5f4" Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.148337 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a8c395-bca0-48a5-bb35-10e956e85a2a" path="/var/lib/kubelet/pods/d7a8c395-bca0-48a5-bb35-10e956e85a2a/volumes" Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.676386 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"33fb947851e7cc3ee647c0570034e415b3c2740d11be071c405261373b2e1607"} Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.676425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" 
event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"b896a75a9afa75b664dc3483a05a2c13f2f681bbe6519bc132703df4f2481b7d"} Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.676435 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"d3966d8db6b8f0b90b87f461647db4a4c614c93e5a91224da0edde6abd617a76"} Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.676472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"24cf6dfda3f33f2fb39c90e6501db61bcbb6f3e6fadcabd30966a768613520f9"} Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.676481 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"e7a67587acec4a62e6fc50d9f778680f4beadef9029c16c77bca11bdb48d1f75"} Dec 06 06:36:07 crc kubenswrapper[4823]: I1206 06:36:07.676489 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"6606dd937090eec3b942c954f37d8c9c1ab36046c206ba0ba4c706f411076894"} Dec 06 06:36:10 crc kubenswrapper[4823]: I1206 06:36:10.697104 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"5f6930f74a72bbb91b2249839ddf46b51ead939a157e31d083c9b70415aaee1d"} Dec 06 06:36:12 crc kubenswrapper[4823]: I1206 06:36:12.711151 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" event={"ID":"aa482e37-6e7c-4caa-9d7b-f860abf53aee","Type":"ContainerStarted","Data":"a3ca942fb9df240f54acc210b755574a3c83a66fed3e7c9c28ec91421282e4cc"} Dec 06 06:36:12 crc kubenswrapper[4823]: I1206 06:36:12.711764 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" Dec 06 06:36:12 crc kubenswrapper[4823]: I1206 06:36:12.711779 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" Dec 06 06:36:12 crc kubenswrapper[4823]: I1206 06:36:12.740902 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" Dec 06 06:36:12 crc kubenswrapper[4823]: I1206 06:36:12.742269 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" podStartSLOduration=7.742259536 podStartE2EDuration="7.742259536s" podCreationTimestamp="2025-12-06 06:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:36:12.741777932 +0000 UTC m=+674.027529902" watchObservedRunningTime="2025-12-06 06:36:12.742259536 +0000 UTC m=+674.028011496" Dec 06 06:36:13 crc kubenswrapper[4823]: I1206 06:36:13.717868 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" Dec 06 06:36:13 crc kubenswrapper[4823]: I1206 06:36:13.745401 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" Dec 06 06:36:17 crc kubenswrapper[4823]: I1206 06:36:17.141450 4823 scope.go:117] "RemoveContainer" containerID="38337c6d04bc6b2fa4ecc741d7ce7660c69d8d9d203cc577034850b4c54a80af" Dec 06 06:36:17 crc kubenswrapper[4823]: E1206 06:36:17.142016 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bldh8_openshift-multus(e2faf943-388e-4105-a30d-b0bbb041f8e0)\"" pod="openshift-multus/multus-bldh8" podUID="e2faf943-388e-4105-a30d-b0bbb041f8e0" Dec 06 06:36:31 crc kubenswrapper[4823]: I1206 06:36:31.141192 4823 scope.go:117] "RemoveContainer" containerID="38337c6d04bc6b2fa4ecc741d7ce7660c69d8d9d203cc577034850b4c54a80af" Dec 06 06:36:31 crc kubenswrapper[4823]: I1206 06:36:31.809450 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bldh8_e2faf943-388e-4105-a30d-b0bbb041f8e0/kube-multus/2.log" Dec 06 06:36:31 crc kubenswrapper[4823]: I1206 06:36:31.809800 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bldh8" event={"ID":"e2faf943-388e-4105-a30d-b0bbb041f8e0","Type":"ContainerStarted","Data":"8c6aa8780501cd7bd3975428a71595600b18b732baa2196d78d883af9e2ca56d"} Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.463428 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v"] Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.465207 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.467782 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.481602 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v"] Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.580706 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htc7f\" (UniqueName: \"kubernetes.io/projected/3a07d62c-425a-451a-a937-aadc80058570-kube-api-access-htc7f\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.580811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.580918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.692110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.692188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htc7f\" (UniqueName: \"kubernetes.io/projected/3a07d62c-425a-451a-a937-aadc80058570-kube-api-access-htc7f\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.692238 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.692856 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.692932 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.720301 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htc7f\" (UniqueName: \"kubernetes.io/projected/3a07d62c-425a-451a-a937-aadc80058570-kube-api-access-htc7f\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.784487 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:34 crc kubenswrapper[4823]: I1206 06:36:34.956139 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v"] Dec 06 06:36:35 crc kubenswrapper[4823]: I1206 06:36:35.835830 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a07d62c-425a-451a-a937-aadc80058570" containerID="28141ff1f5abf257258ae390ecb434137702d009420e41600dbfd09cfbdb0cce" exitCode=0 Dec 06 06:36:35 crc kubenswrapper[4823]: I1206 06:36:35.835922 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" event={"ID":"3a07d62c-425a-451a-a937-aadc80058570","Type":"ContainerDied","Data":"28141ff1f5abf257258ae390ecb434137702d009420e41600dbfd09cfbdb0cce"} Dec 06 06:36:35 crc kubenswrapper[4823]: I1206 06:36:35.836170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" event={"ID":"3a07d62c-425a-451a-a937-aadc80058570","Type":"ContainerStarted","Data":"a5c703e5024db13d7a29fa4b381ab8d207b7606abd3d6fd68b3be1da5b18f7e8"} Dec 06 06:36:36 crc kubenswrapper[4823]: I1206 06:36:36.328395 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6znmn" Dec 06 06:36:41 crc kubenswrapper[4823]: I1206 06:36:41.867411 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a07d62c-425a-451a-a937-aadc80058570" containerID="574a41ed2b5fe0816ee86b081a836d41c722ac4bae1acb804f143b5c60f7ce3f" exitCode=0 Dec 06 06:36:41 crc kubenswrapper[4823]: I1206 06:36:41.867528 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" event={"ID":"3a07d62c-425a-451a-a937-aadc80058570","Type":"ContainerDied","Data":"574a41ed2b5fe0816ee86b081a836d41c722ac4bae1acb804f143b5c60f7ce3f"} Dec 06 06:36:42 crc kubenswrapper[4823]: I1206 06:36:42.877183 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a07d62c-425a-451a-a937-aadc80058570" containerID="844f1dce026f0c7b1b21ae6b121caf0b428741c5e37d1512537fb25c3e7c3204" exitCode=0 Dec 06 06:36:42 crc kubenswrapper[4823]: I1206 06:36:42.877250 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" event={"ID":"3a07d62c-425a-451a-a937-aadc80058570","Type":"ContainerDied","Data":"844f1dce026f0c7b1b21ae6b121caf0b428741c5e37d1512537fb25c3e7c3204"} Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.094453 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.218416 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-bundle\") pod \"3a07d62c-425a-451a-a937-aadc80058570\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.218498 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htc7f\" (UniqueName: \"kubernetes.io/projected/3a07d62c-425a-451a-a937-aadc80058570-kube-api-access-htc7f\") pod \"3a07d62c-425a-451a-a937-aadc80058570\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.218547 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-util\") pod \"3a07d62c-425a-451a-a937-aadc80058570\" (UID: \"3a07d62c-425a-451a-a937-aadc80058570\") " Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.221547 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-bundle" (OuterVolumeSpecName: "bundle") pod "3a07d62c-425a-451a-a937-aadc80058570" (UID: "3a07d62c-425a-451a-a937-aadc80058570"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.223546 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a07d62c-425a-451a-a937-aadc80058570-kube-api-access-htc7f" (OuterVolumeSpecName: "kube-api-access-htc7f") pod "3a07d62c-425a-451a-a937-aadc80058570" (UID: "3a07d62c-425a-451a-a937-aadc80058570"). InnerVolumeSpecName "kube-api-access-htc7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.228578 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-util" (OuterVolumeSpecName: "util") pod "3a07d62c-425a-451a-a937-aadc80058570" (UID: "3a07d62c-425a-451a-a937-aadc80058570"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.319983 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.320044 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htc7f\" (UniqueName: \"kubernetes.io/projected/3a07d62c-425a-451a-a937-aadc80058570-kube-api-access-htc7f\") on node \"crc\" DevicePath \"\"" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.320058 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a07d62c-425a-451a-a937-aadc80058570-util\") on node \"crc\" DevicePath \"\"" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.891813 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" event={"ID":"3a07d62c-425a-451a-a937-aadc80058570","Type":"ContainerDied","Data":"a5c703e5024db13d7a29fa4b381ab8d207b7606abd3d6fd68b3be1da5b18f7e8"} Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.891852 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5c703e5024db13d7a29fa4b381ab8d207b7606abd3d6fd68b3be1da5b18f7e8" Dec 06 06:36:44 crc kubenswrapper[4823]: I1206 06:36:44.891862 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.796157 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw"] Dec 06 06:36:57 crc kubenswrapper[4823]: E1206 06:36:57.796884 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="util" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.796896 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="util" Dec 06 06:36:57 crc kubenswrapper[4823]: E1206 06:36:57.796910 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="extract" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.796917 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="extract" Dec 06 06:36:57 crc kubenswrapper[4823]: E1206 06:36:57.796927 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="pull" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.796935 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="pull" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.797082 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a07d62c-425a-451a-a937-aadc80058570" containerName="extract" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.797527 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.804152 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.804200 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.804676 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-8xdc5" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.842571 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw"] Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.933219 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstkv\" (UniqueName: \"kubernetes.io/projected/f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c-kube-api-access-lstkv\") pod \"obo-prometheus-operator-668cf9dfbb-74wlw\" (UID: \"f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.933806 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq"] Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.934500 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.937643 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.938115 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pnlrg" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.962442 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9"] Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.963264 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:57 crc kubenswrapper[4823]: I1206 06:36:57.969253 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.034770 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f907eb32-7551-4d4a-b365-cbaa043890b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq\" (UID: \"f907eb32-7551-4d4a-b365-cbaa043890b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.034837 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f907eb32-7551-4d4a-b365-cbaa043890b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq\" (UID: \"f907eb32-7551-4d4a-b365-cbaa043890b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.034941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstkv\" (UniqueName: \"kubernetes.io/projected/f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c-kube-api-access-lstkv\") pod \"obo-prometheus-operator-668cf9dfbb-74wlw\" (UID: \"f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.036412 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.067289 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstkv\" (UniqueName: \"kubernetes.io/projected/f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c-kube-api-access-lstkv\") pod \"obo-prometheus-operator-668cf9dfbb-74wlw\" (UID: \"f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.117470 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.136966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f907eb32-7551-4d4a-b365-cbaa043890b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq\" (UID: \"f907eb32-7551-4d4a-b365-cbaa043890b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.137064 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fe5f932d-2587-431d-87ff-0c02b2270c11-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9\" (UID: \"fe5f932d-2587-431d-87ff-0c02b2270c11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.137131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fe5f932d-2587-431d-87ff-0c02b2270c11-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9\" (UID: \"fe5f932d-2587-431d-87ff-0c02b2270c11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.137173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f907eb32-7551-4d4a-b365-cbaa043890b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq\" (UID: \"f907eb32-7551-4d4a-b365-cbaa043890b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.143684 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f907eb32-7551-4d4a-b365-cbaa043890b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq\" (UID: \"f907eb32-7551-4d4a-b365-cbaa043890b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.147686 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f907eb32-7551-4d4a-b365-cbaa043890b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq\" (UID: \"f907eb32-7551-4d4a-b365-cbaa043890b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.152956 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6r9cv"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.153972 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.159589 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-btnhp" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.159835 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.174151 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6r9cv"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.238091 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fe5f932d-2587-431d-87ff-0c02b2270c11-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9\" (UID: \"fe5f932d-2587-431d-87ff-0c02b2270c11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.238162 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fe5f932d-2587-431d-87ff-0c02b2270c11-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9\" (UID: \"fe5f932d-2587-431d-87ff-0c02b2270c11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.252529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fe5f932d-2587-431d-87ff-0c02b2270c11-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9\" (UID: \"fe5f932d-2587-431d-87ff-0c02b2270c11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.256226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fe5f932d-2587-431d-87ff-0c02b2270c11-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9\" (UID: \"fe5f932d-2587-431d-87ff-0c02b2270c11\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.258297 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.292400 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.329562 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-q2gc2"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.330727 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.333847 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-h6n8s" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.339534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm78f\" (UniqueName: \"kubernetes.io/projected/6bb10a2a-8118-4c1f-bc30-d680071b8992-kube-api-access-mm78f\") pod \"observability-operator-d8bb48f5d-6r9cv\" (UID: \"6bb10a2a-8118-4c1f-bc30-d680071b8992\") " pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.339746 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bb10a2a-8118-4c1f-bc30-d680071b8992-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6r9cv\" (UID: \"6bb10a2a-8118-4c1f-bc30-d680071b8992\") " pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.357481 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-q2gc2"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.455602 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lr8\" (UniqueName: \"kubernetes.io/projected/66fe6642-822b-4700-b97c-48ef71676514-kube-api-access-f9lr8\") pod \"perses-operator-5446b9c989-q2gc2\" (UID: \"66fe6642-822b-4700-b97c-48ef71676514\") " pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.455668 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bb10a2a-8118-4c1f-bc30-d680071b8992-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6r9cv\" (UID: \"6bb10a2a-8118-4c1f-bc30-d680071b8992\") " pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.455756 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm78f\" (UniqueName: \"kubernetes.io/projected/6bb10a2a-8118-4c1f-bc30-d680071b8992-kube-api-access-mm78f\") pod \"observability-operator-d8bb48f5d-6r9cv\" (UID: \"6bb10a2a-8118-4c1f-bc30-d680071b8992\") " pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.455840 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/66fe6642-822b-4700-b97c-48ef71676514-openshift-service-ca\") pod \"perses-operator-5446b9c989-q2gc2\" (UID: \"66fe6642-822b-4700-b97c-48ef71676514\") " pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.459270 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bb10a2a-8118-4c1f-bc30-d680071b8992-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6r9cv\" (UID: \"6bb10a2a-8118-4c1f-bc30-d680071b8992\") " pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 
06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.477474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm78f\" (UniqueName: \"kubernetes.io/projected/6bb10a2a-8118-4c1f-bc30-d680071b8992-kube-api-access-mm78f\") pod \"observability-operator-d8bb48f5d-6r9cv\" (UID: \"6bb10a2a-8118-4c1f-bc30-d680071b8992\") " pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.512394 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.556988 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/66fe6642-822b-4700-b97c-48ef71676514-openshift-service-ca\") pod \"perses-operator-5446b9c989-q2gc2\" (UID: \"66fe6642-822b-4700-b97c-48ef71676514\") " pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.557054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9lr8\" (UniqueName: \"kubernetes.io/projected/66fe6642-822b-4700-b97c-48ef71676514-kube-api-access-f9lr8\") pod \"perses-operator-5446b9c989-q2gc2\" (UID: \"66fe6642-822b-4700-b97c-48ef71676514\") " pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.558164 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/66fe6642-822b-4700-b97c-48ef71676514-openshift-service-ca\") pod \"perses-operator-5446b9c989-q2gc2\" (UID: \"66fe6642-822b-4700-b97c-48ef71676514\") " pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.586782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9lr8\" (UniqueName: \"kubernetes.io/projected/66fe6642-822b-4700-b97c-48ef71676514-kube-api-access-f9lr8\") pod \"perses-operator-5446b9c989-q2gc2\" (UID: \"66fe6642-822b-4700-b97c-48ef71676514\") " pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.588055 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.680979 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-q2gc2" Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.692763 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.855139 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6r9cv"] Dec 06 06:36:58 crc kubenswrapper[4823]: W1206 06:36:58.861521 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb10a2a_8118_4c1f_bc30_d680071b8992.slice/crio-2e2a47980f6951e63b33106c84e7a4d92944a916bb44add1f6697866fe0fa24d WatchSource:0}: Error finding container 2e2a47980f6951e63b33106c84e7a4d92944a916bb44add1f6697866fe0fa24d: Status 404 returned error can't find the container with id 2e2a47980f6951e63b33106c84e7a4d92944a916bb44add1f6697866fe0fa24d Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.941486 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-q2gc2"] Dec 06 06:36:58 crc kubenswrapper[4823]: W1206 06:36:58.947553 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66fe6642_822b_4700_b97c_48ef71676514.slice/crio-96bd20d14f5459ce287cae25f6fc3cebfd113305d16ce7245f1290bb0b423014 WatchSource:0}: Error finding container 96bd20d14f5459ce287cae25f6fc3cebfd113305d16ce7245f1290bb0b423014: Status 404 returned error can't find the container with id 96bd20d14f5459ce287cae25f6fc3cebfd113305d16ce7245f1290bb0b423014 Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.975380 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" event={"ID":"f907eb32-7551-4d4a-b365-cbaa043890b1","Type":"ContainerStarted","Data":"59859aae7e3c0539e66e80227efa021f5782e8bee6c47feb67cde67b92f73d17"} Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.975576 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9"] Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.977708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-q2gc2" event={"ID":"66fe6642-822b-4700-b97c-48ef71676514","Type":"ContainerStarted","Data":"96bd20d14f5459ce287cae25f6fc3cebfd113305d16ce7245f1290bb0b423014"} Dec 06 06:36:58 crc kubenswrapper[4823]: W1206 06:36:58.978828 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe5f932d_2587_431d_87ff_0c02b2270c11.slice/crio-006a51904f1f3fd4d22029ace7d27bdf84336d41c1083adfb0e35d366afc1f75 WatchSource:0}: Error finding container 006a51904f1f3fd4d22029ace7d27bdf84336d41c1083adfb0e35d366afc1f75: Status 404 returned error can't find the container with id 006a51904f1f3fd4d22029ace7d27bdf84336d41c1083adfb0e35d366afc1f75 Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.979081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" event={"ID":"6bb10a2a-8118-4c1f-bc30-d680071b8992","Type":"ContainerStarted","Data":"2e2a47980f6951e63b33106c84e7a4d92944a916bb44add1f6697866fe0fa24d"} Dec 06 06:36:58 crc kubenswrapper[4823]: I1206 06:36:58.980619 
4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" event={"ID":"f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c","Type":"ContainerStarted","Data":"5bb6f3ec55f0d9aa45065a3894cbba042044b82912ba2e47681f6af4e46922bd"} Dec 06 06:36:59 crc kubenswrapper[4823]: I1206 06:36:59.991219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" event={"ID":"fe5f932d-2587-431d-87ff-0c02b2270c11","Type":"ContainerStarted","Data":"006a51904f1f3fd4d22029ace7d27bdf84336d41c1083adfb0e35d366afc1f75"} Dec 06 06:37:06 crc kubenswrapper[4823]: I1206 06:37:06.051722 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:37:06 crc kubenswrapper[4823]: I1206 06:37:06.052005 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:37:18 crc kubenswrapper[4823]: E1206 06:37:18.492507 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Dec 06 06:37:18 crc kubenswrapper[4823]: E1206 06:37:18.493231 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mm78f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-6r9cv_openshift-operators(6bb10a2a-8118-4c1f-bc30-d680071b8992): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 06:37:18 crc kubenswrapper[4823]: E1206 06:37:18.495255 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" podUID="6bb10a2a-8118-4c1f-bc30-d680071b8992"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.041716 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.041903 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9_openshift-operators(fe5f932d-2587-431d-87ff-0c02b2270c11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.043159 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" podUID="fe5f932d-2587-431d-87ff-0c02b2270c11"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.054259 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.054394 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq_openshift-operators(f907eb32-7551-4d4a-b365-cbaa043890b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.055604 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" podUID="f907eb32-7551-4d4a-b365-cbaa043890b1"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.230181 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" podUID="fe5f932d-2587-431d-87ff-0c02b2270c11"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.230236 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" podUID="f907eb32-7551-4d4a-b365-cbaa043890b1"
Dec 06 06:37:19 crc kubenswrapper[4823]: E1206 06:37:19.230292 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" podUID="6bb10a2a-8118-4c1f-bc30-d680071b8992"
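The ResourceList values dumped in the Container specs above (cpu: {{400 -3} {} 400m DecimalSI}, memory: {{536870912 0} {} BinarySI}) are the internal rendering of a Kubernetes resource.Quantity: an unscaled int64 plus a base-10 scale, a cached string form, and a format (DecimalSI or BinarySI). A minimal sketch decoding both notations with the standard k8s.io/apimachinery API, using the operator's limits from the dump above:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// cpu: {{400 -3} {} 400m DecimalSI} -> unscaled 400 at scale 10^-3
	// (0.4 CPU), cached string "400m", decimal-SI format.
	cpu := resource.MustParse("400m")
	fmt.Println(cpu.MilliValue(), cpu.String(), cpu.Format) // 400 400m DecimalSI

	// memory: {{536870912 0} {} BinarySI} -> 536870912 bytes at scale
	// 10^0, i.e. 512Mi, binary-SI format with no cached string yet.
	mem := resource.MustParse("512Mi")
	fmt.Println(mem.Value(), mem.Format) // 536870912 BinarySI
}
```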
Dec 06 06:37:20 crc kubenswrapper[4823]: I1206 06:37:20.234177 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-q2gc2" event={"ID":"66fe6642-822b-4700-b97c-48ef71676514","Type":"ContainerStarted","Data":"c4e442fb640547238bb5370adf832e8824b84a18b39a332bc6358e1e98fab492"}
Dec 06 06:37:20 crc kubenswrapper[4823]: I1206 06:37:20.234444 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-q2gc2"
Dec 06 06:37:20 crc kubenswrapper[4823]: I1206 06:37:20.236629 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" event={"ID":"f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c","Type":"ContainerStarted","Data":"eeda0eb3a090bdee4cf24ab58dd70c642b601e8ca90ec3cbd2f05bf1c8b24b78"}
Dec 06 06:37:20 crc kubenswrapper[4823]: I1206 06:37:20.265041 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-q2gc2" podStartSLOduration=2.167228899 podStartE2EDuration="22.265025725s" podCreationTimestamp="2025-12-06 06:36:58 +0000 UTC" firstStartedPulling="2025-12-06 06:36:58.95049366 +0000 UTC m=+720.236245620" lastFinishedPulling="2025-12-06 06:37:19.048290486 +0000 UTC m=+740.334042446" observedRunningTime="2025-12-06 06:37:20.2614291 +0000 UTC m=+741.547181060" watchObservedRunningTime="2025-12-06 06:37:20.265025725 +0000 UTC m=+741.550777685"
Dec 06 06:37:20 crc kubenswrapper[4823]: I1206 06:37:20.281681 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-74wlw" podStartSLOduration=2.865609974 podStartE2EDuration="23.281650826s" podCreationTimestamp="2025-12-06 06:36:57 +0000 UTC" firstStartedPulling="2025-12-06 06:36:58.636573279 +0000 UTC m=+719.922325249" lastFinishedPulling="2025-12-06 06:37:19.052614141 +0000 UTC m=+740.338366101" observedRunningTime="2025-12-06 06:37:20.278307939 +0000 UTC m=+741.564059899" watchObservedRunningTime="2025-12-06 06:37:20.281650826 +0000 UTC m=+741.567402796"
Dec 06 06:37:28 crc kubenswrapper[4823]: I1206 06:37:28.683759 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-q2gc2"
Dec 06 06:37:32 crc kubenswrapper[4823]: I1206 06:37:32.303303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" event={"ID":"fe5f932d-2587-431d-87ff-0c02b2270c11","Type":"ContainerStarted","Data":"15a9d2bab771bd7d8383bdf4c610b40f831e56c5a62434f7b9dfd7d500a99d1a"}
Dec 06 06:37:32 crc kubenswrapper[4823]: I1206 06:37:32.323402 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9" podStartSLOduration=2.476767201 podStartE2EDuration="35.323378418s" podCreationTimestamp="2025-12-06 06:36:57 +0000 UTC" firstStartedPulling="2025-12-06 06:36:58.981434456 +0000 UTC m=+720.267186416" lastFinishedPulling="2025-12-06 06:37:31.828045673 +0000 UTC m=+753.113797633" observedRunningTime="2025-12-06 06:37:32.320002 +0000 UTC m=+753.605754060" watchObservedRunningTime="2025-12-06 06:37:32.323378418 +0000 UTC m=+753.609130378"
Dec 06 06:37:33 crc kubenswrapper[4823]: I1206 06:37:33.311057 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" event={"ID":"f907eb32-7551-4d4a-b365-cbaa043890b1","Type":"ContainerStarted","Data":"bddb679029aef6b0bdbb974c9f0bc7c079bb5cf85dc1c2329e1461706dc7b550"}
Dec 06 06:37:33 crc kubenswrapper[4823]: I1206 06:37:33.332172 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq" podStartSLOduration=-9223372000.52262 podStartE2EDuration="36.332156474s" podCreationTimestamp="2025-12-06 06:36:57 +0000 UTC" firstStartedPulling="2025-12-06 06:36:58.721885359 +0000 UTC m=+720.007637319" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:37:33.330700981 +0000 UTC m=+754.616452931" watchObservedRunningTime="2025-12-06 06:37:33.332156474 +0000 UTC m=+754.617908434"
Dec 06 06:37:35 crc kubenswrapper[4823]: I1206 06:37:35.322977 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" event={"ID":"6bb10a2a-8118-4c1f-bc30-d680071b8992","Type":"ContainerStarted","Data":"b86a7c371a1687763634cb4672799e7bbe0ecda042afccb9a432e2b6128f8700"}
Dec 06 06:37:35 crc kubenswrapper[4823]: I1206 06:37:35.323469 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv"
Dec 06 06:37:35 crc kubenswrapper[4823]: I1206 06:37:35.324506 4823 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-6r9cv container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.30:8081/healthz\": dial tcp 10.217.0.30:8081: connect: connection refused" start-of-body=
Dec 06 06:37:35 crc kubenswrapper[4823]: I1206 06:37:35.324566 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" podUID="6bb10a2a-8118-4c1f-bc30-d680071b8992" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.30:8081/healthz\": dial tcp 10.217.0.30:8081: connect: connection refused"
Dec 06 06:37:36 crc kubenswrapper[4823]: I1206 06:37:36.052441 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:37:36 crc kubenswrapper[4823]: I1206 06:37:36.052516 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:37:36 crc kubenswrapper[4823]: I1206 06:37:36.418350 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv"
Dec 06 06:37:36 crc kubenswrapper[4823]: I1206 06:37:36.437538 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-6r9cv" podStartSLOduration=2.340756624 podStartE2EDuration="38.437524019s" podCreationTimestamp="2025-12-06 06:36:58 +0000 UTC" firstStartedPulling="2025-12-06 06:36:58.864368066 +0000 UTC m=+720.150120026" lastFinishedPulling="2025-12-06 06:37:34.961135461 +0000 UTC m=+756.246887421" observedRunningTime="2025-12-06 06:37:35.345424311 +0000 UTC m=+756.631176271" watchObservedRunningTime="2025-12-06 06:37:36.437524019 +0000 UTC m=+757.723275979"
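The readiness failure above (connection refused on 10.217.0.30:8081/healthz, then status "ready" about a second later) matches the probe declared in the operator Container spec dumped earlier: Port:{0 8081 } is an intstr.IntOrString with Type 0 (integer) and IntVal 8081. A sketch of the equivalent probe definition using the standard k8s.io/api types, with values copied from that spec dump:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Mirrors the logged probe: HTTP GET /healthz on port 8081, 1s
	// timeout, probed every 10s, unready after 3 consecutive misses.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/healthz",
				Port:   intstr.FromInt32(8081), // renders as {0 8081 } in the dump
				Scheme: corev1.URISchemeHTTP,
			},
		},
		TimeoutSeconds:   1,
		PeriodSeconds:    10,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", *probe)
}
```

A "connection refused" here just means the operator process had not yet bound port 8081 when the first probe fired; the next successful probe flips the container to ready.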
Dec 06 06:37:48 crc kubenswrapper[4823]: I1206 06:37:48.506551 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.572113 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"]
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.574310 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.576194 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.582611 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"]
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.751390 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd9gl\" (UniqueName: \"kubernetes.io/projected/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-kube-api-access-hd9gl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.751504 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.751552 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.852762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.852824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd9gl\" (UniqueName: \"kubernetes.io/projected/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-kube-api-access-hd9gl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.852892 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.853371 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.853412 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.873002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd9gl\" (UniqueName: \"kubernetes.io/projected/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-kube-api-access-hd9gl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:54 crc kubenswrapper[4823]: I1206 06:37:54.889739 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:37:55 crc kubenswrapper[4823]: I1206 06:37:55.241284 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"]
Dec 06 06:37:56 crc kubenswrapper[4823]: I1206 06:37:56.090757 4823 generic.go:334] "Generic (PLEG): container finished" podID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerID="89f3aba74c3c0c08b12f2915ff6669b3d4e933aaea8cc894dcc762f6dd598544" exitCode=0
Dec 06 06:37:56 crc kubenswrapper[4823]: I1206 06:37:56.090842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6" event={"ID":"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788","Type":"ContainerDied","Data":"89f3aba74c3c0c08b12f2915ff6669b3d4e933aaea8cc894dcc762f6dd598544"}
Dec 06 06:37:56 crc kubenswrapper[4823]: I1206 06:37:56.091101 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6" event={"ID":"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788","Type":"ContainerStarted","Data":"aab11f85b88e7763e4ffca10c1b62f250710ab9d7b360e0736787e5c0235e323"}
Dec 06 06:37:56 crc kubenswrapper[4823]: I1206 06:37:56.922155 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2d4bj"]
Dec 06 06:37:56 crc kubenswrapper[4823]: I1206 06:37:56.923288 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:56 crc kubenswrapper[4823]: I1206 06:37:56.931153 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2d4bj"]
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.080044 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-utilities\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.080106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l29xr\" (UniqueName: \"kubernetes.io/projected/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-kube-api-access-l29xr\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.080161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-catalog-content\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.180877 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-utilities\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.180933 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l29xr\" (UniqueName: \"kubernetes.io/projected/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-kube-api-access-l29xr\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.180984 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-catalog-content\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.181576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-catalog-content\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.181623 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-utilities\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.201022 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l29xr\" (UniqueName: \"kubernetes.io/projected/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-kube-api-access-l29xr\") pod \"redhat-operators-2d4bj\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.276586 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:37:57 crc kubenswrapper[4823]: I1206 06:37:57.549331 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2d4bj"]
Dec 06 06:37:57 crc kubenswrapper[4823]: W1206 06:37:57.559599 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88aff8e_c748_4b1e_bfb1_65efd1c35b7c.slice/crio-e36f942879bbbd8e7138658d3a5cdc50323f2e0d4a4a5023e99446ee845e7c04 WatchSource:0}: Error finding container e36f942879bbbd8e7138658d3a5cdc50323f2e0d4a4a5023e99446ee845e7c04: Status 404 returned error can't find the container with id e36f942879bbbd8e7138658d3a5cdc50323f2e0d4a4a5023e99446ee845e7c04
Dec 06 06:37:58 crc kubenswrapper[4823]: I1206 06:37:58.104051 4823 generic.go:334] "Generic (PLEG): container finished" podID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerID="4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f" exitCode=0
Dec 06 06:37:58 crc kubenswrapper[4823]: I1206 06:37:58.104395 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerDied","Data":"4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f"}
Dec 06 06:37:58 crc kubenswrapper[4823]: I1206 06:37:58.104423 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerStarted","Data":"e36f942879bbbd8e7138658d3a5cdc50323f2e0d4a4a5023e99446ee845e7c04"}
Dec 06 06:37:58 crc kubenswrapper[4823]: I1206 06:37:58.109375 4823 generic.go:334] "Generic (PLEG): container finished" podID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerID="679c28e0cbac8ef40d46ce2c67a276fdde7ff4b5c5a8c97ecfe77f7c31a87f02" exitCode=0
Dec 06 06:37:58 crc kubenswrapper[4823]: I1206 06:37:58.109422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6" event={"ID":"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788","Type":"ContainerDied","Data":"679c28e0cbac8ef40d46ce2c67a276fdde7ff4b5c5a8c97ecfe77f7c31a87f02"}
Dec 06 06:37:59 crc kubenswrapper[4823]: I1206 06:37:59.118086 4823 generic.go:334] "Generic (PLEG): container finished" podID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerID="0101cc3a2977665bbb2be60c55b286b436da4abf7e283aab91087921c0ef2e82" exitCode=0
Dec 06 06:37:59 crc kubenswrapper[4823]: I1206 06:37:59.118502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6" event={"ID":"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788","Type":"ContainerDied","Data":"0101cc3a2977665bbb2be60c55b286b436da4abf7e283aab91087921c0ef2e82"}
Dec 06 06:37:59 crc kubenswrapper[4823]: I1206 06:37:59.120608 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerStarted","Data":"b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690"}
Dec 06 06:38:00 crc kubenswrapper[4823]: I1206 06:38:00.988727 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6"
Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.028120 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-bundle\") pod \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") "
Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.028271 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-util\") pod \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") "
Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.028340 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd9gl\" (UniqueName: \"kubernetes.io/projected/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-kube-api-access-hd9gl\") pod \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\" (UID: \"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788\") "
Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.028909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-bundle" (OuterVolumeSpecName: "bundle") pod "bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" (UID: "bf6c7416-ec1a-4c0d-97ac-6f1a1c618788"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.034760 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-kube-api-access-hd9gl" (OuterVolumeSpecName: "kube-api-access-hd9gl") pod "bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" (UID: "bf6c7416-ec1a-4c0d-97ac-6f1a1c618788"). InnerVolumeSpecName "kube-api-access-hd9gl". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.129821 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-util\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.129853 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd9gl\" (UniqueName: \"kubernetes.io/projected/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-kube-api-access-hd9gl\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.129864 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf6c7416-ec1a-4c0d-97ac-6f1a1c618788-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.136067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6" event={"ID":"bf6c7416-ec1a-4c0d-97ac-6f1a1c618788","Type":"ContainerDied","Data":"aab11f85b88e7763e4ffca10c1b62f250710ab9d7b360e0736787e5c0235e323"} Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.136103 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab11f85b88e7763e4ffca10c1b62f250710ab9d7b360e0736787e5c0235e323" Dec 06 06:38:01 crc kubenswrapper[4823]: I1206 06:38:01.136146 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6" Dec 06 06:38:02 crc kubenswrapper[4823]: I1206 06:38:02.142595 4823 generic.go:334] "Generic (PLEG): container finished" podID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerID="b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690" exitCode=0 Dec 06 06:38:02 crc kubenswrapper[4823]: I1206 06:38:02.142643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerDied","Data":"b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690"} Dec 06 06:38:03 crc kubenswrapper[4823]: I1206 06:38:03.241312 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerStarted","Data":"623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28"} Dec 06 06:38:03 crc kubenswrapper[4823]: I1206 06:38:03.265716 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2d4bj" podStartSLOduration=2.814633079 podStartE2EDuration="7.265694643s" podCreationTimestamp="2025-12-06 06:37:56 +0000 UTC" firstStartedPulling="2025-12-06 06:37:58.105739796 +0000 UTC m=+779.391491756" lastFinishedPulling="2025-12-06 06:38:02.55680136 +0000 UTC m=+783.842553320" observedRunningTime="2025-12-06 06:38:03.2631822 +0000 UTC m=+784.548934160" watchObservedRunningTime="2025-12-06 06:38:03.265694643 +0000 UTC m=+784.551446633" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.037596 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk"] Dec 06 06:38:04 crc kubenswrapper[4823]: E1206 06:38:04.037843 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="pull" Dec 06 
06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.037854 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="pull" Dec 06 06:38:04 crc kubenswrapper[4823]: E1206 06:38:04.037865 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="extract" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.037871 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="extract" Dec 06 06:38:04 crc kubenswrapper[4823]: E1206 06:38:04.037888 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="util" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.037895 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="util" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.037999 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6c7416-ec1a-4c0d-97ac-6f1a1c618788" containerName="extract" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.038422 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.040778 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.040976 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.041102 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-55kff" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.044085 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgjh\" (UniqueName: \"kubernetes.io/projected/e25db4e0-af48-4315-b9f0-ee0a2d774e46-kube-api-access-7fgjh\") pod \"nmstate-operator-5b5b58f5c8-z7fqk\" (UID: \"e25db4e0-af48-4315-b9f0-ee0a2d774e46\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.051673 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk"] Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.145715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fgjh\" (UniqueName: \"kubernetes.io/projected/e25db4e0-af48-4315-b9f0-ee0a2d774e46-kube-api-access-7fgjh\") pod \"nmstate-operator-5b5b58f5c8-z7fqk\" (UID: \"e25db4e0-af48-4315-b9f0-ee0a2d774e46\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.166044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fgjh\" (UniqueName: \"kubernetes.io/projected/e25db4e0-af48-4315-b9f0-ee0a2d774e46-kube-api-access-7fgjh\") pod \"nmstate-operator-5b5b58f5c8-z7fqk\" (UID: \"e25db4e0-af48-4315-b9f0-ee0a2d774e46\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.353732 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" Dec 06 06:38:04 crc kubenswrapper[4823]: I1206 06:38:04.704083 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk"] Dec 06 06:38:05 crc kubenswrapper[4823]: I1206 06:38:05.254386 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" event={"ID":"e25db4e0-af48-4315-b9f0-ee0a2d774e46","Type":"ContainerStarted","Data":"465adf7985f395fa1f70a8aa91cfb22f8896af101a16f19d5dfac3b70a30580f"} Dec 06 06:38:06 crc kubenswrapper[4823]: I1206 06:38:06.051411 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:38:06 crc kubenswrapper[4823]: I1206 06:38:06.052396 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:38:06 crc kubenswrapper[4823]: I1206 06:38:06.052518 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:38:06 crc kubenswrapper[4823]: I1206 06:38:06.053258 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eadb100f9de392020e8ad9c0c80f79bb4ee89b08a0b99aaf32660b2052224b2"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:38:06 crc kubenswrapper[4823]: I1206 06:38:06.053398 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://5eadb100f9de392020e8ad9c0c80f79bb4ee89b08a0b99aaf32660b2052224b2" gracePeriod=600 Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.267016 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="5eadb100f9de392020e8ad9c0c80f79bb4ee89b08a0b99aaf32660b2052224b2" exitCode=0 Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.267066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"5eadb100f9de392020e8ad9c0c80f79bb4ee89b08a0b99aaf32660b2052224b2"} Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.267377 4823 scope.go:117] "RemoveContainer" containerID="6548b8fc0740e8ab287d9661d80c7359a1550d44b0eb944832e91861fa69169a" Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.277047 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2d4bj" Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.277094 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2d4bj" Dec 06 06:38:08 crc kubenswrapper[4823]: 
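In the machine-config-daemon sequence above, a liveness failure (connection refused on 127.0.0.1:8798/health) makes the kubelet mark the container unhealthy, kill it with the pod's 600s grace period, garbage-collect the previous instance's container ID (RemoveContainer), and start a replacement. "connection refused" means no listener was bound to the port at probe time; the handler never ran. A minimal, hypothetical stand-in for the kind of health endpoint such a probe polls (not MCD's actual code):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// While this process is down or not yet listening, a liveness probe
	// against 127.0.0.1:8798/health fails with "connection refused",
	// exactly as in the log entries above.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```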
Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.277047 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:38:07 crc kubenswrapper[4823]: I1206 06:38:07.277094 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2d4bj"
Dec 06 06:38:08 crc kubenswrapper[4823]: I1206 06:38:08.283058 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"2a1cf76af8a6f384ac47680b767c5129bfc1481da61050b03811147d1a619220"}
Dec 06 06:38:08 crc kubenswrapper[4823]: I1206 06:38:08.325807 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2d4bj" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="registry-server" probeResult="failure" output=<
Dec 06 06:38:08 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s
Dec 06 06:38:08 crc kubenswrapper[4823]: >
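The startup probe output above ("timeout: failed to connect service \":50051\" within 1s") looks like a gRPC health check against the catalog pod's registry-server, which conventionally serves on port 50051; the catalog is still loading content when the probe first fires. A sketch of an equivalent client-side check with a 1s deadline, using the standard grpc-go health API (the actual probe binary in the image may differ):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Mirror the probe's 1s budget for establishing the connection.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		// Same failure mode as the logged probe while the server is
		// not yet accepting connections.
		fmt.Println("failed to connect service \":50051\" within 1s:", err)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	fmt.Println(resp.GetStatus(), err)
}
```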
Dec 06 06:38:09 crc kubenswrapper[4823]: I1206 06:38:09.291895 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" event={"ID":"e25db4e0-af48-4315-b9f0-ee0a2d774e46","Type":"ContainerStarted","Data":"765b812d22ba3f45a00e039ba0045c1b90de49adcd1a77c5ec588c5b957cee39"}
Dec 06 06:38:09 crc kubenswrapper[4823]: I1206 06:38:09.310718 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-z7fqk" podStartSLOduration=1.526652565 podStartE2EDuration="5.310700137s" podCreationTimestamp="2025-12-06 06:38:04 +0000 UTC" firstStartedPulling="2025-12-06 06:38:04.718178844 +0000 UTC m=+786.003930804" lastFinishedPulling="2025-12-06 06:38:08.502226416 +0000 UTC m=+789.787978376" observedRunningTime="2025-12-06 06:38:09.307935117 +0000 UTC m=+790.593687097" watchObservedRunningTime="2025-12-06 06:38:09.310700137 +0000 UTC m=+790.596452117"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.293227 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.294150 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.300632 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-4r5mg"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.306938 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.321893 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.322746 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.329933 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.340342 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b6h6k"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.341378 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.352809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2c4t\" (UniqueName: \"kubernetes.io/projected/efd022a3-2590-4bb1-93ff-b194f6451b5f-kube-api-access-n2c4t\") pod \"nmstate-metrics-7f946cbc9-5wb79\" (UID: \"efd022a3-2590-4bb1-93ff-b194f6451b5f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.373199 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.448584 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.449470 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.455271 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.455481 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.455513 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dmggk"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456294 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c2ecc92a-539d-4a78-93d0-ca682b8d76a3-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vjn99\" (UID: \"c2ecc92a-539d-4a78-93d0-ca682b8d76a3\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-nmstate-lock\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456381 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2c4t\" (UniqueName: \"kubernetes.io/projected/efd022a3-2590-4bb1-93ff-b194f6451b5f-kube-api-access-n2c4t\") pod \"nmstate-metrics-7f946cbc9-5wb79\" (UID: \"efd022a3-2590-4bb1-93ff-b194f6451b5f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn95g\" (UniqueName: \"kubernetes.io/projected/c2ecc92a-539d-4a78-93d0-ca682b8d76a3-kube-api-access-mn95g\") pod \"nmstate-webhook-5f6d4c5ccb-vjn99\" (UID: \"c2ecc92a-539d-4a78-93d0-ca682b8d76a3\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456431 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4vh\" (UniqueName: \"kubernetes.io/projected/bf97aa79-e20e-4304-84e5-abd78e1de48a-kube-api-access-vj4vh\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456455 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-ovs-socket\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.456498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-dbus-socket\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.469195 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"]
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.511942 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2c4t\" (UniqueName: \"kubernetes.io/projected/efd022a3-2590-4bb1-93ff-b194f6451b5f-kube-api-access-n2c4t\") pod \"nmstate-metrics-7f946cbc9-5wb79\" (UID: \"efd022a3-2590-4bb1-93ff-b194f6451b5f\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567280 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e25ab073-dc82-4437-b4b4-e74f7a063f35-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567388 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn95g\" (UniqueName: \"kubernetes.io/projected/c2ecc92a-539d-4a78-93d0-ca682b8d76a3-kube-api-access-mn95g\") pod \"nmstate-webhook-5f6d4c5ccb-vjn99\" (UID: \"c2ecc92a-539d-4a78-93d0-ca682b8d76a3\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4vh\" (UniqueName: \"kubernetes.io/projected/bf97aa79-e20e-4304-84e5-abd78e1de48a-kube-api-access-vj4vh\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567451 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-ovs-socket\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567490 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b72kd\" (UniqueName: \"kubernetes.io/projected/e25ab073-dc82-4437-b4b4-e74f7a063f35-kube-api-access-b72kd\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567525 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-dbus-socket\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567554 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c2ecc92a-539d-4a78-93d0-ca682b8d76a3-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vjn99\" (UID: \"c2ecc92a-539d-4a78-93d0-ca682b8d76a3\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ab073-dc82-4437-b4b4-e74f7a063f35-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567594 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-nmstate-lock\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.567708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-nmstate-lock\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.568393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-ovs-socket\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.568842 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bf97aa79-e20e-4304-84e5-abd78e1de48a-dbus-socket\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.576490 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c2ecc92a-539d-4a78-93d0-ca682b8d76a3-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vjn99\" (UID: \"c2ecc92a-539d-4a78-93d0-ca682b8d76a3\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.592465 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn95g\" (UniqueName: \"kubernetes.io/projected/c2ecc92a-539d-4a78-93d0-ca682b8d76a3-kube-api-access-mn95g\") pod \"nmstate-webhook-5f6d4c5ccb-vjn99\" (UID: \"c2ecc92a-539d-4a78-93d0-ca682b8d76a3\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.596227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4vh\" (UniqueName: \"kubernetes.io/projected/bf97aa79-e20e-4304-84e5-abd78e1de48a-kube-api-access-vj4vh\") pod \"nmstate-handler-b6h6k\" (UID: \"bf97aa79-e20e-4304-84e5-abd78e1de48a\") " pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.612981 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.637956 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.661401 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b6h6k"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.668749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e25ab073-dc82-4437-b4b4-e74f7a063f35-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.668836 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b72kd\" (UniqueName: \"kubernetes.io/projected/e25ab073-dc82-4437-b4b4-e74f7a063f35-kube-api-access-b72kd\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.668877 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ab073-dc82-4437-b4b4-e74f7a063f35-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"
Dec 06 06:38:10 crc kubenswrapper[4823]: E1206 06:38:10.668999 4823 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Dec 06 06:38:10 crc kubenswrapper[4823]: E1206 06:38:10.669051 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25ab073-dc82-4437-b4b4-e74f7a063f35-plugin-serving-cert podName:e25ab073-dc82-4437-b4b4-e74f7a063f35 nodeName:}" failed. No retries permitted until 2025-12-06 06:38:11.16903181 +0000 UTC m=+792.454783770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/e25ab073-dc82-4437-b4b4-e74f7a063f35-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-hnxhv" (UID: "e25ab073-dc82-4437-b4b4-e74f7a063f35") : secret "plugin-serving-cert" not found
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-trusted-ca-bundle\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.874395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-oauth-config\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.874438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-oauth-serving-cert\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.874462 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-service-ca\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.928963 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79"] Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976234 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-serving-cert\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-config\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976338 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-trusted-ca-bundle\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976422 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-oauth-config\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976479 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-oauth-serving-cert\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976528 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-service-ca\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.976582 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp77v\" (UniqueName: \"kubernetes.io/projected/de681a3a-fcdd-430d-9c8b-b8a4dac41011-kube-api-access-wp77v\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.978038 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-service-ca\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.978235 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-oauth-serving-cert\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.979225 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-trusted-ca-bundle\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.981282 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-config\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.989407 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-oauth-config\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.989582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de681a3a-fcdd-430d-9c8b-b8a4dac41011-console-serving-cert\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:10 crc kubenswrapper[4823]: I1206 06:38:10.995945 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp77v\" (UniqueName: 
\"kubernetes.io/projected/de681a3a-fcdd-430d-9c8b-b8a4dac41011-kube-api-access-wp77v\") pod \"console-768ff484c9-gbdtl\" (UID: \"de681a3a-fcdd-430d-9c8b-b8a4dac41011\") " pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.029192 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99"] Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.063630 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.179576 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ab073-dc82-4437-b4b4-e74f7a063f35-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv" Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.185389 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e25ab073-dc82-4437-b4b4-e74f7a063f35-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-hnxhv\" (UID: \"e25ab073-dc82-4437-b4b4-e74f7a063f35\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv" Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.274331 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-768ff484c9-gbdtl"] Dec 06 06:38:11 crc kubenswrapper[4823]: W1206 06:38:11.284502 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde681a3a_fcdd_430d_9c8b_b8a4dac41011.slice/crio-cab940ea781a8262380abbff42c90e905e55571ce904ac5ba010579225783a9d WatchSource:0}: Error finding container cab940ea781a8262380abbff42c90e905e55571ce904ac5ba010579225783a9d: Status 404 returned error can't find the container with id cab940ea781a8262380abbff42c90e905e55571ce904ac5ba010579225783a9d Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.308699 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99" event={"ID":"c2ecc92a-539d-4a78-93d0-ca682b8d76a3","Type":"ContainerStarted","Data":"b96317627232551ac8c8b79adbdd3408d95b48e67dee46ac5df4c6ec82656826"} Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.309725 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79" event={"ID":"efd022a3-2590-4bb1-93ff-b194f6451b5f","Type":"ContainerStarted","Data":"d1fe31b0611037b5ecbbe896c23ca3d2e7a7ba338fbaeb013c889a9268cae1a1"} Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.310738 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-768ff484c9-gbdtl" event={"ID":"de681a3a-fcdd-430d-9c8b-b8a4dac41011","Type":"ContainerStarted","Data":"cab940ea781a8262380abbff42c90e905e55571ce904ac5ba010579225783a9d"} Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.311599 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b6h6k" event={"ID":"bf97aa79-e20e-4304-84e5-abd78e1de48a","Type":"ContainerStarted","Data":"2ad7ccdf745b5d3b1a05888e38335cae939f9fb9dea1bd872f3da6d25000b822"} Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.365457 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv" Dec 06 06:38:11 crc kubenswrapper[4823]: I1206 06:38:11.566360 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv"] Dec 06 06:38:12 crc kubenswrapper[4823]: I1206 06:38:12.323924 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv" event={"ID":"e25ab073-dc82-4437-b4b4-e74f7a063f35","Type":"ContainerStarted","Data":"85fd9b7d504b26dccaa4c909dd36de2097c9c166f7ebe2aadafd1a932f256a2d"} Dec 06 06:38:13 crc kubenswrapper[4823]: I1206 06:38:13.332603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-768ff484c9-gbdtl" event={"ID":"de681a3a-fcdd-430d-9c8b-b8a4dac41011","Type":"ContainerStarted","Data":"07f8cd20092b33a63f0c3afde6495823fb4fe5ce168c6910df2b686540df93f9"} Dec 06 06:38:13 crc kubenswrapper[4823]: I1206 06:38:13.350327 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-768ff484c9-gbdtl" podStartSLOduration=3.350310849 podStartE2EDuration="3.350310849s" podCreationTimestamp="2025-12-06 06:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:38:13.349065683 +0000 UTC m=+794.634817653" watchObservedRunningTime="2025-12-06 06:38:13.350310849 +0000 UTC m=+794.636062809" Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.350075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b6h6k" event={"ID":"bf97aa79-e20e-4304-84e5-abd78e1de48a","Type":"ContainerStarted","Data":"6dbd393d676bb66a86ebc7a7d6c2282489d3febb21839f0abe34ddc8c8276907"} Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.350856 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b6h6k" Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.361687 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99" event={"ID":"c2ecc92a-539d-4a78-93d0-ca682b8d76a3","Type":"ContainerStarted","Data":"352692565bcb9a5310272e212b752a15a48ecc717d21c09d4fdc1c5e9c0470b1"} Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.361784 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99" Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.361849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79" event={"ID":"efd022a3-2590-4bb1-93ff-b194f6451b5f","Type":"ContainerStarted","Data":"33e534a9e689ef67654f0910209129ba29c059ed15e4f36b72e7afa149b0adcd"} Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.369874 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b6h6k" podStartSLOduration=1.395430409 podStartE2EDuration="5.369851331s" podCreationTimestamp="2025-12-06 06:38:10 +0000 UTC" firstStartedPulling="2025-12-06 06:38:10.744345773 +0000 UTC m=+792.030097733" lastFinishedPulling="2025-12-06 06:38:14.718766695 +0000 UTC m=+796.004518655" observedRunningTime="2025-12-06 06:38:15.368335567 +0000 UTC m=+796.654087537" watchObservedRunningTime="2025-12-06 06:38:15.369851331 +0000 UTC m=+796.655603291" Dec 06 06:38:15 crc kubenswrapper[4823]: I1206 06:38:15.389080 4823 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99" podStartSLOduration=1.656105237 podStartE2EDuration="5.389058928s" podCreationTimestamp="2025-12-06 06:38:10 +0000 UTC" firstStartedPulling="2025-12-06 06:38:11.039261404 +0000 UTC m=+792.325013364" lastFinishedPulling="2025-12-06 06:38:14.772215085 +0000 UTC m=+796.057967055" observedRunningTime="2025-12-06 06:38:15.382500258 +0000 UTC m=+796.668252218" watchObservedRunningTime="2025-12-06 06:38:15.389058928 +0000 UTC m=+796.674810888" Dec 06 06:38:16 crc kubenswrapper[4823]: I1206 06:38:16.365015 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv" event={"ID":"e25ab073-dc82-4437-b4b4-e74f7a063f35","Type":"ContainerStarted","Data":"075e055fa17f68170ed337cdc29f58ec90b60a97201cbec6fe6b2e930b53ae88"} Dec 06 06:38:16 crc kubenswrapper[4823]: I1206 06:38:16.380039 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-hnxhv" podStartSLOduration=2.255220909 podStartE2EDuration="6.380015609s" podCreationTimestamp="2025-12-06 06:38:10 +0000 UTC" firstStartedPulling="2025-12-06 06:38:11.576171111 +0000 UTC m=+792.861923071" lastFinishedPulling="2025-12-06 06:38:15.700965811 +0000 UTC m=+796.986717771" observedRunningTime="2025-12-06 06:38:16.379235637 +0000 UTC m=+797.664987597" watchObservedRunningTime="2025-12-06 06:38:16.380015609 +0000 UTC m=+797.665767579" Dec 06 06:38:17 crc kubenswrapper[4823]: I1206 06:38:17.316640 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2d4bj" Dec 06 06:38:17 crc kubenswrapper[4823]: I1206 06:38:17.363194 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2d4bj" Dec 06 06:38:17 crc kubenswrapper[4823]: I1206 06:38:17.373006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79" event={"ID":"efd022a3-2590-4bb1-93ff-b194f6451b5f","Type":"ContainerStarted","Data":"8d2ba1be4647e3033a321baf4cbfc340dea8c8e5243aa46e661f46d5e7506e8d"} Dec 06 06:38:17 crc kubenswrapper[4823]: I1206 06:38:17.540911 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-5wb79" podStartSLOduration=1.514973116 podStartE2EDuration="7.540891887s" podCreationTimestamp="2025-12-06 06:38:10 +0000 UTC" firstStartedPulling="2025-12-06 06:38:10.942356714 +0000 UTC m=+792.228108664" lastFinishedPulling="2025-12-06 06:38:16.968275475 +0000 UTC m=+798.254027435" observedRunningTime="2025-12-06 06:38:17.407917762 +0000 UTC m=+798.693669722" watchObservedRunningTime="2025-12-06 06:38:17.540891887 +0000 UTC m=+798.826643847" Dec 06 06:38:17 crc kubenswrapper[4823]: I1206 06:38:17.545350 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2d4bj"] Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.378575 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2d4bj" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="registry-server" containerID="cri-o://623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28" gracePeriod=2 Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.738016 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2d4bj" Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.800927 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-catalog-content\") pod \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.801002 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-utilities\") pod \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.801066 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l29xr\" (UniqueName: \"kubernetes.io/projected/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-kube-api-access-l29xr\") pod \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\" (UID: \"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c\") " Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.802862 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-utilities" (OuterVolumeSpecName: "utilities") pod "b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" (UID: "b88aff8e-c748-4b1e-bfb1-65efd1c35b7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.808265 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-kube-api-access-l29xr" (OuterVolumeSpecName: "kube-api-access-l29xr") pod "b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" (UID: "b88aff8e-c748-4b1e-bfb1-65efd1c35b7c"). InnerVolumeSpecName "kube-api-access-l29xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.902637 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.902699 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l29xr\" (UniqueName: \"kubernetes.io/projected/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-kube-api-access-l29xr\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:18 crc kubenswrapper[4823]: I1206 06:38:18.919225 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" (UID: "b88aff8e-c748-4b1e-bfb1-65efd1c35b7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.003718 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.387191 4823 generic.go:334] "Generic (PLEG): container finished" podID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerID="623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28" exitCode=0 Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.387242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerDied","Data":"623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28"} Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.387274 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d4bj" event={"ID":"b88aff8e-c748-4b1e-bfb1-65efd1c35b7c","Type":"ContainerDied","Data":"e36f942879bbbd8e7138658d3a5cdc50323f2e0d4a4a5023e99446ee845e7c04"} Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.387293 4823 scope.go:117] "RemoveContainer" containerID="623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.387894 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2d4bj" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.406088 4823 scope.go:117] "RemoveContainer" containerID="b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.412057 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2d4bj"] Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.418939 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2d4bj"] Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.427379 4823 scope.go:117] "RemoveContainer" containerID="4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.446406 4823 scope.go:117] "RemoveContainer" containerID="623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28" Dec 06 06:38:19 crc kubenswrapper[4823]: E1206 06:38:19.446834 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28\": container with ID starting with 623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28 not found: ID does not exist" containerID="623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.446877 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28"} err="failed to get container status \"623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28\": rpc error: code = NotFound desc = could not find container \"623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28\": container with ID starting with 623d1dd7f25dd96905d37de847f762357da7fa3e98d075978c96cb78cefaba28 not found: ID does not exist" Dec 06 06:38:19 crc 
kubenswrapper[4823]: I1206 06:38:19.446907 4823 scope.go:117] "RemoveContainer" containerID="b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690" Dec 06 06:38:19 crc kubenswrapper[4823]: E1206 06:38:19.447356 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690\": container with ID starting with b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690 not found: ID does not exist" containerID="b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.447386 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690"} err="failed to get container status \"b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690\": rpc error: code = NotFound desc = could not find container \"b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690\": container with ID starting with b5ae761145309460ac05913b4d653f07daa336d3f2df8bbf8fa84de835e8c690 not found: ID does not exist" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.447402 4823 scope.go:117] "RemoveContainer" containerID="4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f" Dec 06 06:38:19 crc kubenswrapper[4823]: E1206 06:38:19.447859 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f\": container with ID starting with 4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f not found: ID does not exist" containerID="4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f" Dec 06 06:38:19 crc kubenswrapper[4823]: I1206 06:38:19.447899 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f"} err="failed to get container status \"4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f\": rpc error: code = NotFound desc = could not find container \"4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f\": container with ID starting with 4b07551bf0039dd2f3a8d7be4e93e932b7a0248fbf729eb698633aa29818891f not found: ID does not exist" Dec 06 06:38:20 crc kubenswrapper[4823]: I1206 06:38:20.686976 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-b6h6k" Dec 06 06:38:21 crc kubenswrapper[4823]: I1206 06:38:21.064285 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:21 crc kubenswrapper[4823]: I1206 06:38:21.064357 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:21 crc kubenswrapper[4823]: I1206 06:38:21.069624 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:21 crc kubenswrapper[4823]: I1206 06:38:21.148078 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" path="/var/lib/kubelet/pods/b88aff8e-c748-4b1e-bfb1-65efd1c35b7c/volumes" Dec 06 06:38:21 crc kubenswrapper[4823]: I1206 06:38:21.403243 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-768ff484c9-gbdtl" Dec 06 06:38:21 crc kubenswrapper[4823]: I1206 06:38:21.454438 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wzsch"] Dec 06 06:38:30 crc kubenswrapper[4823]: I1206 06:38:30.644729 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vjn99" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.670078 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft"] Dec 06 06:38:43 crc kubenswrapper[4823]: E1206 06:38:43.671865 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="registry-server" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.671904 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="registry-server" Dec 06 06:38:43 crc kubenswrapper[4823]: E1206 06:38:43.671919 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="extract-utilities" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.671926 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="extract-utilities" Dec 06 06:38:43 crc kubenswrapper[4823]: E1206 06:38:43.671939 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="extract-content" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.671946 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="extract-content" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.672110 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88aff8e-c748-4b1e-bfb1-65efd1c35b7c" containerName="registry-server" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.673283 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.676284 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.687454 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft"] Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.776448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.776528 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq84p\" (UniqueName: \"kubernetes.io/projected/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-kube-api-access-lq84p\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.776575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.878283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.878336 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq84p\" (UniqueName: \"kubernetes.io/projected/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-kube-api-access-lq84p\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.878383 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.878891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.878931 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.897621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq84p\" (UniqueName: \"kubernetes.io/projected/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-kube-api-access-lq84p\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:43 crc kubenswrapper[4823]: I1206 06:38:43.995055 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:44 crc kubenswrapper[4823]: I1206 06:38:44.396975 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft"] Dec 06 06:38:44 crc kubenswrapper[4823]: W1206 06:38:44.400169 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6e91ee_2aa4_4411_9eaa_eaa5f85c2901.slice/crio-36fea85f5ddda776e658a827407dd26765cc04e0b462d4caf3f7ccc582cd8c37 WatchSource:0}: Error finding container 36fea85f5ddda776e658a827407dd26765cc04e0b462d4caf3f7ccc582cd8c37: Status 404 returned error can't find the container with id 36fea85f5ddda776e658a827407dd26765cc04e0b462d4caf3f7ccc582cd8c37 Dec 06 06:38:44 crc kubenswrapper[4823]: I1206 06:38:44.542555 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" event={"ID":"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901","Type":"ContainerStarted","Data":"36fea85f5ddda776e658a827407dd26765cc04e0b462d4caf3f7ccc582cd8c37"} Dec 06 06:38:45 crc kubenswrapper[4823]: I1206 06:38:45.549293 4823 generic.go:334] "Generic (PLEG): container finished" podID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerID="72f7d4bf286e94ab8068ac495d85c3b70d3fa305d487d6f7a30dada6a95a2e78" exitCode=0 Dec 06 06:38:45 crc kubenswrapper[4823]: I1206 06:38:45.549351 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" event={"ID":"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901","Type":"ContainerDied","Data":"72f7d4bf286e94ab8068ac495d85c3b70d3fa305d487d6f7a30dada6a95a2e78"} Dec 06 06:38:46 crc kubenswrapper[4823]: I1206 06:38:46.495461 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wzsch" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" containerID="cri-o://d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457" gracePeriod=15 Dec 06 06:38:46 crc 
kubenswrapper[4823]: I1206 06:38:46.905202 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wzsch_e802aa0a-cd13-43df-be69-40b0bca7200f/console/0.log" Dec 06 06:38:46 crc kubenswrapper[4823]: I1206 06:38:46.905383 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.027722 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-serving-cert\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.027769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-service-ca\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.027817 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wlmt\" (UniqueName: \"kubernetes.io/projected/e802aa0a-cd13-43df-be69-40b0bca7200f-kube-api-access-8wlmt\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.027857 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-console-config\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.027891 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-oauth-serving-cert\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.027912 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-trusted-ca-bundle\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028575 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028603 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-service-ca" (OuterVolumeSpecName: "service-ca") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028635 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-oauth-config\") pod \"e802aa0a-cd13-43df-be69-40b0bca7200f\" (UID: \"e802aa0a-cd13-43df-be69-40b0bca7200f\") " Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028648 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-console-config" (OuterVolumeSpecName: "console-config") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028912 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028935 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028946 4823 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.028957 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e802aa0a-cd13-43df-be69-40b0bca7200f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.035418 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.036183 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.036359 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e802aa0a-cd13-43df-be69-40b0bca7200f-kube-api-access-8wlmt" (OuterVolumeSpecName: "kube-api-access-8wlmt") pod "e802aa0a-cd13-43df-be69-40b0bca7200f" (UID: "e802aa0a-cd13-43df-be69-40b0bca7200f"). InnerVolumeSpecName "kube-api-access-8wlmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.130436 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.130473 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e802aa0a-cd13-43df-be69-40b0bca7200f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.130485 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wlmt\" (UniqueName: \"kubernetes.io/projected/e802aa0a-cd13-43df-be69-40b0bca7200f-kube-api-access-8wlmt\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.567756 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wzsch_e802aa0a-cd13-43df-be69-40b0bca7200f/console/0.log" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.567821 4823 generic.go:334] "Generic (PLEG): container finished" podID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerID="d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457" exitCode=2 Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.567900 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wzsch" event={"ID":"e802aa0a-cd13-43df-be69-40b0bca7200f","Type":"ContainerDied","Data":"d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457"} Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.567897 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wzsch" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.567936 4823 scope.go:117] "RemoveContainer" containerID="d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.567926 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wzsch" event={"ID":"e802aa0a-cd13-43df-be69-40b0bca7200f","Type":"ContainerDied","Data":"1d87b4ed24bc0cd85b3c61eca48f485fbdb095433efed6a335a8f4c27aad7936"} Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.570092 4823 generic.go:334] "Generic (PLEG): container finished" podID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerID="7261fed909043124fc6b648ed89755618267753c8d525cc80164e257e8c0fdb1" exitCode=0 Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.570143 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" event={"ID":"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901","Type":"ContainerDied","Data":"7261fed909043124fc6b648ed89755618267753c8d525cc80164e257e8c0fdb1"} Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.597650 4823 scope.go:117] "RemoveContainer" containerID="d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457" Dec 06 06:38:47 crc kubenswrapper[4823]: E1206 06:38:47.598054 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457\": container with ID starting with d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457 not found: ID does not exist" containerID="d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.598095 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457"} err="failed to get container status \"d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457\": rpc error: code = NotFound desc = could not find container \"d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457\": container with ID starting with d53ba278867c2d1dc56b9f03fb96868d460414484050feb3a27fef6d628fa457 not found: ID does not exist" Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.600930 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wzsch"] Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.609347 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wzsch"] Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.859793 4823 patch_prober.go:28] interesting pod/console-f9d7485db-wzsch container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:38:47 crc kubenswrapper[4823]: I1206 06:38:47.860134 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-wzsch" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 
06:38:48 crc kubenswrapper[4823]: I1206 06:38:48.578965 4823 generic.go:334] "Generic (PLEG): container finished" podID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerID="d356491f4f9db859f1bfbdcf6d383e5b059511b113c882ee041f16f82219bf47" exitCode=0 Dec 06 06:38:48 crc kubenswrapper[4823]: I1206 06:38:48.579003 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" event={"ID":"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901","Type":"ContainerDied","Data":"d356491f4f9db859f1bfbdcf6d383e5b059511b113c882ee041f16f82219bf47"} Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.149110 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" path="/var/lib/kubelet/pods/e802aa0a-cd13-43df-be69-40b0bca7200f/volumes" Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.883273 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.971727 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-util\") pod \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.971771 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-bundle\") pod \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.971829 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq84p\" (UniqueName: \"kubernetes.io/projected/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-kube-api-access-lq84p\") pod \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\" (UID: \"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901\") " Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.972969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-bundle" (OuterVolumeSpecName: "bundle") pod "5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" (UID: "5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.977264 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-kube-api-access-lq84p" (OuterVolumeSpecName: "kube-api-access-lq84p") pod "5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" (UID: "5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901"). InnerVolumeSpecName "kube-api-access-lq84p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:38:49 crc kubenswrapper[4823]: I1206 06:38:49.989393 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-util" (OuterVolumeSpecName: "util") pod "5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" (UID: "5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:38:50 crc kubenswrapper[4823]: I1206 06:38:50.073712 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq84p\" (UniqueName: \"kubernetes.io/projected/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-kube-api-access-lq84p\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:50 crc kubenswrapper[4823]: I1206 06:38:50.073766 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-util\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:50 crc kubenswrapper[4823]: I1206 06:38:50.073778 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:38:50 crc kubenswrapper[4823]: I1206 06:38:50.598213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" event={"ID":"5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901","Type":"ContainerDied","Data":"36fea85f5ddda776e658a827407dd26765cc04e0b462d4caf3f7ccc582cd8c37"} Dec 06 06:38:50 crc kubenswrapper[4823]: I1206 06:38:50.598257 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36fea85f5ddda776e658a827407dd26765cc04e0b462d4caf3f7ccc582cd8c37" Dec 06 06:38:50 crc kubenswrapper[4823]: I1206 06:38:50.598288 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft" Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.902750 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"] Dec 06 06:38:58 crc kubenswrapper[4823]: E1206 06:38:58.904643 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="pull" Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.904771 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="pull" Dec 06 06:38:58 crc kubenswrapper[4823]: E1206 06:38:58.904886 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="extract" Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.904983 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="extract" Dec 06 06:38:58 crc kubenswrapper[4823]: E1206 06:38:58.905096 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="util" Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.905180 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="util" Dec 06 06:38:58 crc kubenswrapper[4823]: E1206 06:38:58.905284 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.905370 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802aa0a-cd13-43df-be69-40b0bca7200f" containerName="console" Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.905606 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901" containerName="extract" Dec 
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.906609 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.911580 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.912040 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.912137 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-k85cd"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.912356 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.912407 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.946793 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"]
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.989688 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72976e23-4d5d-42d6-9667-ccf6e45411a4-apiservice-cert\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.989880 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcld\" (UniqueName: \"kubernetes.io/projected/72976e23-4d5d-42d6-9667-ccf6e45411a4-kube-api-access-jbcld\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:58 crc kubenswrapper[4823]: I1206 06:38:58.989909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72976e23-4d5d-42d6-9667-ccf6e45411a4-webhook-cert\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.091127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72976e23-4d5d-42d6-9667-ccf6e45411a4-webhook-cert\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.091173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcld\" (UniqueName: \"kubernetes.io/projected/72976e23-4d5d-42d6-9667-ccf6e45411a4-kube-api-access-jbcld\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.091196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72976e23-4d5d-42d6-9667-ccf6e45411a4-apiservice-cert\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.098833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72976e23-4d5d-42d6-9667-ccf6e45411a4-apiservice-cert\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.098870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72976e23-4d5d-42d6-9667-ccf6e45411a4-webhook-cert\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.114331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcld\" (UniqueName: \"kubernetes.io/projected/72976e23-4d5d-42d6-9667-ccf6e45411a4-kube-api-access-jbcld\") pod \"metallb-operator-controller-manager-648d7bc7c7-lfcwj\" (UID: \"72976e23-4d5d-42d6-9667-ccf6e45411a4\") " pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.230167 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-k85cd"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.232572 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"]
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.233462 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.236190 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.237343 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-d4v9n"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.237748 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.248720 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.266969 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"]
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.293766 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80b20d2f-135b-4bd8-8236-19429964c077-webhook-cert\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.294188 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80b20d2f-135b-4bd8-8236-19429964c077-apiservice-cert\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.294216 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gbhd\" (UniqueName: \"kubernetes.io/projected/80b20d2f-135b-4bd8-8236-19429964c077-kube-api-access-6gbhd\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.396180 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80b20d2f-135b-4bd8-8236-19429964c077-apiservice-cert\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.396267 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gbhd\" (UniqueName: \"kubernetes.io/projected/80b20d2f-135b-4bd8-8236-19429964c077-kube-api-access-6gbhd\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.396404 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80b20d2f-135b-4bd8-8236-19429964c077-webhook-cert\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.402194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80b20d2f-135b-4bd8-8236-19429964c077-webhook-cert\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.416329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gbhd\" (UniqueName: \"kubernetes.io/projected/80b20d2f-135b-4bd8-8236-19429964c077-kube-api-access-6gbhd\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.425076 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80b20d2f-135b-4bd8-8236-19429964c077-apiservice-cert\") pod \"metallb-operator-webhook-server-58db4d7bbd-nw4xn\" (UID: \"80b20d2f-135b-4bd8-8236-19429964c077\") " pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.558656 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.575325 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"]
Dec 06 06:38:59 crc kubenswrapper[4823]: I1206 06:38:59.705806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj" event={"ID":"72976e23-4d5d-42d6-9667-ccf6e45411a4","Type":"ContainerStarted","Data":"9bdfaa8a8aa31859d56283c348fd797991173d83bd07b7e574f43fb9b7f1f3a0"}
Dec 06 06:39:00 crc kubenswrapper[4823]: I1206 06:39:00.005641 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn"]
Dec 06 06:39:00 crc kubenswrapper[4823]: W1206 06:39:00.010003 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80b20d2f_135b_4bd8_8236_19429964c077.slice/crio-9285953fb8f7d319e94e543ec417694335435b35d2acae16a9f3adc8ab152f81 WatchSource:0}: Error finding container 9285953fb8f7d319e94e543ec417694335435b35d2acae16a9f3adc8ab152f81: Status 404 returned error can't find the container with id 9285953fb8f7d319e94e543ec417694335435b35d2acae16a9f3adc8ab152f81
Dec 06 06:39:00 crc kubenswrapper[4823]: I1206 06:39:00.714046 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn" event={"ID":"80b20d2f-135b-4bd8-8236-19429964c077","Type":"ContainerStarted","Data":"9285953fb8f7d319e94e543ec417694335435b35d2acae16a9f3adc8ab152f81"}
Dec 06 06:39:02 crc kubenswrapper[4823]: I1206 06:39:02.733494 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj" event={"ID":"72976e23-4d5d-42d6-9667-ccf6e45411a4","Type":"ContainerStarted","Data":"518e8934facf9aedd5d21d59d836167f2935efee814f69ce8af7b4ad8c28d91e"}
Dec 06 06:39:02 crc kubenswrapper[4823]: I1206 06:39:02.734175 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj"
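The W1206 manager.go:1169 "Failed to process watch event ... Status 404" warning above is a benign race: cadvisor sees the new crio-* cgroup appear before CRI-O has finished registering the container, so the first lookup returns 404; the ContainerStarted event that follows shows the container came up normally. A hedged sketch of tolerating such a race by retrying the lookup (hypothetical helper, not cadvisor code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupWithRetry retries a container lookup a few times before treating
    // "not found" as fatal, since cgroup events can outrun registration.
    func lookupWithRetry(find func(id string) error, id string, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = find(id); err == nil {
                return nil
            }
            time.Sleep(delay) // container may not be registered yet
        }
        return fmt.Errorf("container %s: %w", id, err)
    }

    func main() {
        notFound := errors.New("Status 404 returned error can't find the container")
        calls := 0
        find := func(id string) error {
            if calls++; calls < 3 {
                return notFound // simulates the race in the warning above
            }
            return nil
        }
        fmt.Println(lookupWithRetry(find, "9285953fb8f7d319e94e543ec41769433543...", 5, 10*time.Millisecond))
    }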
pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj" Dec 06 06:39:02 crc kubenswrapper[4823]: I1206 06:39:02.762422 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj" podStartSLOduration=1.863338158 podStartE2EDuration="4.762400691s" podCreationTimestamp="2025-12-06 06:38:58 +0000 UTC" firstStartedPulling="2025-12-06 06:38:59.606472451 +0000 UTC m=+840.892224421" lastFinishedPulling="2025-12-06 06:39:02.505534994 +0000 UTC m=+843.791286954" observedRunningTime="2025-12-06 06:39:02.755195262 +0000 UTC m=+844.040947222" watchObservedRunningTime="2025-12-06 06:39:02.762400691 +0000 UTC m=+844.048152651" Dec 06 06:39:05 crc kubenswrapper[4823]: I1206 06:39:05.751303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn" event={"ID":"80b20d2f-135b-4bd8-8236-19429964c077","Type":"ContainerStarted","Data":"a86991b8097c9929f93100e636b3dbc9d84aeca2bfe560c8f2941b5fe0e0a0c7"} Dec 06 06:39:05 crc kubenswrapper[4823]: I1206 06:39:05.751602 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn" Dec 06 06:39:05 crc kubenswrapper[4823]: I1206 06:39:05.774316 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn" podStartSLOduration=2.009129742 podStartE2EDuration="6.774297595s" podCreationTimestamp="2025-12-06 06:38:59 +0000 UTC" firstStartedPulling="2025-12-06 06:39:00.01273968 +0000 UTC m=+841.298491640" lastFinishedPulling="2025-12-06 06:39:04.777907533 +0000 UTC m=+846.063659493" observedRunningTime="2025-12-06 06:39:05.771522495 +0000 UTC m=+847.057274465" watchObservedRunningTime="2025-12-06 06:39:05.774297595 +0000 UTC m=+847.060049555" Dec 06 06:39:19 crc kubenswrapper[4823]: I1206 06:39:19.564290 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58db4d7bbd-nw4xn" Dec 06 06:39:39 crc kubenswrapper[4823]: I1206 06:39:39.238835 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-648d7bc7c7-lfcwj" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.135873 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pw4mq"] Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.139169 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pw4mq" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.150752 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.151118 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.152019 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wjm5p" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.165898 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"] Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.167137 4823 util.go:30] "No sandbox for pod can be found. 
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.167137 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.168931 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.178798 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"]
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.179691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb5ef3cd-9337-4665-945a-403b2619c53d-metrics-certs\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.179916 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-startup\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.180054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwsf6\" (UniqueName: \"kubernetes.io/projected/eb5ef3cd-9337-4665-945a-403b2619c53d-kube-api-access-wwsf6\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.180163 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-metrics\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.180311 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-conf\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.180398 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-sockets\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.180530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-reloader\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282357 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-reloader\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282439 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb5ef3cd-9337-4665-945a-403b2619c53d-metrics-certs\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282481 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94cf4797-42d3-4c53-9d68-93210ba23378-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lsjzk\" (UID: \"94cf4797-42d3-4c53-9d68-93210ba23378\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282508 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-startup\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwsf6\" (UniqueName: \"kubernetes.io/projected/eb5ef3cd-9337-4665-945a-403b2619c53d-kube-api-access-wwsf6\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282588 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-metrics\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282641 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx42p\" (UniqueName: \"kubernetes.io/projected/94cf4797-42d3-4c53-9d68-93210ba23378-kube-api-access-dx42p\") pod \"frr-k8s-webhook-server-7fcb986d4-lsjzk\" (UID: \"94cf4797-42d3-4c53-9d68-93210ba23378\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282696 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-conf\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.282742 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-sockets\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.283228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-sockets\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.283488 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-reloader\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.283894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-metrics\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.283909 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-conf\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.284789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb5ef3cd-9337-4665-945a-403b2619c53d-frr-startup\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.292339 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb5ef3cd-9337-4665-945a-403b2619c53d-metrics-certs\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.295994 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-r9ml4"]
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.297206 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.300117 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.300225 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.300210 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2fnnc"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.302310 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.304909 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwsf6\" (UniqueName: \"kubernetes.io/projected/eb5ef3cd-9337-4665-945a-403b2619c53d-kube-api-access-wwsf6\") pod \"frr-k8s-pw4mq\" (UID: \"eb5ef3cd-9337-4665-945a-403b2619c53d\") " pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.331460 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-jg4v8"]
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.332842 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.334613 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.340283 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jg4v8"]
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94cf4797-42d3-4c53-9d68-93210ba23378-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lsjzk\" (UID: \"94cf4797-42d3-4c53-9d68-93210ba23378\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx42p\" (UniqueName: \"kubernetes.io/projected/94cf4797-42d3-4c53-9d68-93210ba23378-kube-api-access-dx42p\") pod \"frr-k8s-webhook-server-7fcb986d4-lsjzk\" (UID: \"94cf4797-42d3-4c53-9d68-93210ba23378\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-cert\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384337 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a94e23f2-d423-4414-9eca-532b936de8ae-metallb-excludel2\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj92l\" (UniqueName: \"kubernetes.io/projected/3c12f95f-8514-4b08-8177-d95f8b0bc24d-kube-api-access-qj92l\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-metrics-certs\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384397 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384415 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5tj8\" (UniqueName: \"kubernetes.io/projected/a94e23f2-d423-4414-9eca-532b936de8ae-kube-api-access-t5tj8\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.384443 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-metrics-certs\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.387595 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94cf4797-42d3-4c53-9d68-93210ba23378-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lsjzk\" (UID: \"94cf4797-42d3-4c53-9d68-93210ba23378\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.406856 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx42p\" (UniqueName: \"kubernetes.io/projected/94cf4797-42d3-4c53-9d68-93210ba23378-kube-api-access-dx42p\") pod \"frr-k8s-webhook-server-7fcb986d4-lsjzk\" (UID: \"94cf4797-42d3-4c53-9d68-93210ba23378\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.466227 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pw4mq"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-cert\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486510 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a94e23f2-d423-4414-9eca-532b936de8ae-metallb-excludel2\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486535 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj92l\" (UniqueName: \"kubernetes.io/projected/3c12f95f-8514-4b08-8177-d95f8b0bc24d-kube-api-access-qj92l\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486553 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-metrics-certs\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5tj8\" (UniqueName: \"kubernetes.io/projected/a94e23f2-d423-4414-9eca-532b936de8ae-kube-api-access-t5tj8\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
\"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.486621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-metrics-certs\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8" Dec 06 06:39:40 crc kubenswrapper[4823]: E1206 06:39:40.486762 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 06:39:40 crc kubenswrapper[4823]: E1206 06:39:40.486796 4823 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 06 06:39:40 crc kubenswrapper[4823]: E1206 06:39:40.486842 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist podName:a94e23f2-d423-4414-9eca-532b936de8ae nodeName:}" failed. No retries permitted until 2025-12-06 06:39:40.986820653 +0000 UTC m=+882.272572613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist") pod "speaker-r9ml4" (UID: "a94e23f2-d423-4414-9eca-532b936de8ae") : secret "metallb-memberlist" not found Dec 06 06:39:40 crc kubenswrapper[4823]: E1206 06:39:40.486864 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-metrics-certs podName:3c12f95f-8514-4b08-8177-d95f8b0bc24d nodeName:}" failed. No retries permitted until 2025-12-06 06:39:40.986855114 +0000 UTC m=+882.272607074 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-metrics-certs") pod "controller-f8648f98b-jg4v8" (UID: "3c12f95f-8514-4b08-8177-d95f8b0bc24d") : secret "controller-certs-secret" not found Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.487503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a94e23f2-d423-4414-9eca-532b936de8ae-metallb-excludel2\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.489937 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-cert\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.491076 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-metrics-certs\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.513286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5tj8\" (UniqueName: \"kubernetes.io/projected/a94e23f2-d423-4414-9eca-532b936de8ae-kube-api-access-t5tj8\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.513561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj92l\" (UniqueName: \"kubernetes.io/projected/3c12f95f-8514-4b08-8177-d95f8b0bc24d-kube-api-access-qj92l\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8" Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.649772 4823 util.go:30] "No sandbox for pod can be found. 
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.938166 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk"]
Dec 06 06:39:40 crc kubenswrapper[4823]: W1206 06:39:40.941979 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94cf4797_42d3_4c53_9d68_93210ba23378.slice/crio-0b17c9c8076a958c2bebdf99449521780c75973fc8fe9f91c519a0c1f7942ad3 WatchSource:0}: Error finding container 0b17c9c8076a958c2bebdf99449521780c75973fc8fe9f91c519a0c1f7942ad3: Status 404 returned error can't find the container with id 0b17c9c8076a958c2bebdf99449521780c75973fc8fe9f91c519a0c1f7942ad3
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.951676 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"30c6722eb34844663a98e6f1d05a2e17d0a8b9f50bf1ace86e487e708fba50d6"}
Dec 06 06:39:40 crc kubenswrapper[4823]: I1206 06:39:40.952483 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" event={"ID":"94cf4797-42d3-4c53-9d68-93210ba23378","Type":"ContainerStarted","Data":"0b17c9c8076a958c2bebdf99449521780c75973fc8fe9f91c519a0c1f7942ad3"}
Dec 06 06:39:41 crc kubenswrapper[4823]: I1206 06:39:41.056200 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:41 crc kubenswrapper[4823]: I1206 06:39:41.056267 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-metrics-certs\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:41 crc kubenswrapper[4823]: E1206 06:39:41.056451 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 06 06:39:41 crc kubenswrapper[4823]: E1206 06:39:41.056621 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist podName:a94e23f2-d423-4414-9eca-532b936de8ae nodeName:}" failed. No retries permitted until 2025-12-06 06:39:42.056595121 +0000 UTC m=+883.342347081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist") pod "speaker-r9ml4" (UID: "a94e23f2-d423-4414-9eca-532b936de8ae") : secret "metallb-memberlist" not found
Dec 06 06:39:41 crc kubenswrapper[4823]: I1206 06:39:41.062423 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c12f95f-8514-4b08-8177-d95f8b0bc24d-metrics-certs\") pod \"controller-f8648f98b-jg4v8\" (UID: \"3c12f95f-8514-4b08-8177-d95f8b0bc24d\") " pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:41 crc kubenswrapper[4823]: I1206 06:39:41.257848 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:41 crc kubenswrapper[4823]: I1206 06:39:41.878329 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jg4v8"]
Dec 06 06:39:41 crc kubenswrapper[4823]: I1206 06:39:41.966140 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jg4v8" event={"ID":"3c12f95f-8514-4b08-8177-d95f8b0bc24d","Type":"ContainerStarted","Data":"93df77800890a794ed43f574f38cdb7f3cbe56840ab715a3b24835b1d3cf2532"}
Dec 06 06:39:42 crc kubenswrapper[4823]: I1206 06:39:42.067210 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:42 crc kubenswrapper[4823]: E1206 06:39:42.067367 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 06 06:39:42 crc kubenswrapper[4823]: E1206 06:39:42.067442 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist podName:a94e23f2-d423-4414-9eca-532b936de8ae nodeName:}" failed. No retries permitted until 2025-12-06 06:39:44.067416229 +0000 UTC m=+885.353168189 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist") pod "speaker-r9ml4" (UID: "a94e23f2-d423-4414-9eca-532b936de8ae") : secret "metallb-memberlist" not found
Dec 06 06:39:42 crc kubenswrapper[4823]: I1206 06:39:42.982336 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jg4v8" event={"ID":"3c12f95f-8514-4b08-8177-d95f8b0bc24d","Type":"ContainerStarted","Data":"1ddf5c0d4d1d2d9cd817e07ccafa204f9036f74d54d9188843c2ba14ce02af90"}
Dec 06 06:39:42 crc kubenswrapper[4823]: I1206 06:39:42.982689 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jg4v8" event={"ID":"3c12f95f-8514-4b08-8177-d95f8b0bc24d","Type":"ContainerStarted","Data":"efc7fe0f448df35e640db72aa7e84417b619bb2b22ea9016289b5686116c4c42"}
Dec 06 06:39:42 crc kubenswrapper[4823]: I1206 06:39:42.982731 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-jg4v8"
Dec 06 06:39:43 crc kubenswrapper[4823]: I1206 06:39:43.004815 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-jg4v8" podStartSLOduration=3.004798929 podStartE2EDuration="3.004798929s" podCreationTimestamp="2025-12-06 06:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:39:43.001224546 +0000 UTC m=+884.286976506" watchObservedRunningTime="2025-12-06 06:39:43.004798929 +0000 UTC m=+884.290550889"
Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.076063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.084391 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4"
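The memberlist mount above fails three times because the metallb-memberlist secret did not exist yet, and nestedpendingoperations doubles durationBeforeRetry on each attempt (500ms at 06:39:40, 1s at 06:39:41, 2s at 06:39:42) until the operator created the secret and the 06:39:44 SetUp succeeded. A small Go sketch of that doubling backoff (the cap is an assumption; the policy is inferred from the three retries visible here):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountWithBackoff retries mount, doubling the delay after each failure
    // up to max, mirroring the durationBeforeRetry growth in the log.
    func mountWithBackoff(mount func() error, initial, max time.Duration) {
        delay := initial
        for {
            if err := mount(); err == nil {
                return
            } else {
                fmt.Printf("retry in %v: %v\n", delay, err)
            }
            time.Sleep(delay)
            if delay *= 2; delay > max {
                delay = max
            }
        }
    }

    func main() {
        attempts := 0
        mount := func() error {
            if attempts++; attempts < 4 {
                return errors.New(`secret "metallb-memberlist" not found`)
            }
            return nil // by now the operator has created the secret
        }
        mountWithBackoff(mount, 500*time.Millisecond, 2*time.Second)
    }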
"MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a94e23f2-d423-4414-9eca-532b936de8ae-memberlist\") pod \"speaker-r9ml4\" (UID: \"a94e23f2-d423-4414-9eca-532b936de8ae\") " pod="metallb-system/speaker-r9ml4" Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.249831 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-r9ml4" Dec 06 06:39:44 crc kubenswrapper[4823]: W1206 06:39:44.286593 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94e23f2_d423_4414_9eca_532b936de8ae.slice/crio-a61ce3916fd345143a2cdbc385d2247a6a9193865dc8c25f3e17412c2e7c66f3 WatchSource:0}: Error finding container a61ce3916fd345143a2cdbc385d2247a6a9193865dc8c25f3e17412c2e7c66f3: Status 404 returned error can't find the container with id a61ce3916fd345143a2cdbc385d2247a6a9193865dc8c25f3e17412c2e7c66f3 Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.998041 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r9ml4" event={"ID":"a94e23f2-d423-4414-9eca-532b936de8ae","Type":"ContainerStarted","Data":"70ca258629f4ccf3791b4805749d122678febde333aa329b3aada078b64083b1"} Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.998125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r9ml4" event={"ID":"a94e23f2-d423-4414-9eca-532b936de8ae","Type":"ContainerStarted","Data":"d3db82d04ec28a067a59b94970e617d9ac3e236385ed5a0e7aae56754463efc5"} Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.998135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r9ml4" event={"ID":"a94e23f2-d423-4414-9eca-532b936de8ae","Type":"ContainerStarted","Data":"a61ce3916fd345143a2cdbc385d2247a6a9193865dc8c25f3e17412c2e7c66f3"} Dec 06 06:39:44 crc kubenswrapper[4823]: I1206 06:39:44.998933 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-r9ml4" Dec 06 06:39:45 crc kubenswrapper[4823]: I1206 06:39:45.014469 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-r9ml4" podStartSLOduration=5.014452601 podStartE2EDuration="5.014452601s" podCreationTimestamp="2025-12-06 06:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:39:45.011884657 +0000 UTC m=+886.297636617" watchObservedRunningTime="2025-12-06 06:39:45.014452601 +0000 UTC m=+886.300204561" Dec 06 06:39:53 crc kubenswrapper[4823]: I1206 06:39:53.344457 4823 generic.go:334] "Generic (PLEG): container finished" podID="eb5ef3cd-9337-4665-945a-403b2619c53d" containerID="33c74a81aea85ec3f04cdc7ab1ec4f92e79500965a04a243390ae0b28b65b1ad" exitCode=0 Dec 06 06:39:53 crc kubenswrapper[4823]: I1206 06:39:53.344502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerDied","Data":"33c74a81aea85ec3f04cdc7ab1ec4f92e79500965a04a243390ae0b28b65b1ad"} Dec 06 06:39:53 crc kubenswrapper[4823]: I1206 06:39:53.346645 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" event={"ID":"94cf4797-42d3-4c53-9d68-93210ba23378","Type":"ContainerStarted","Data":"ac1190fd44b3832a5bbda49cf675212a00b1f8613c3ae205b9c180332a9da0e0"} Dec 06 06:39:53 crc kubenswrapper[4823]: I1206 06:39:53.346790 
4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" Dec 06 06:39:54 crc kubenswrapper[4823]: I1206 06:39:54.255044 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-r9ml4" Dec 06 06:39:54 crc kubenswrapper[4823]: I1206 06:39:54.275846 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" podStartSLOduration=2.640378279 podStartE2EDuration="14.275826107s" podCreationTimestamp="2025-12-06 06:39:40 +0000 UTC" firstStartedPulling="2025-12-06 06:39:40.944520765 +0000 UTC m=+882.230272725" lastFinishedPulling="2025-12-06 06:39:52.579968593 +0000 UTC m=+893.865720553" observedRunningTime="2025-12-06 06:39:53.401152088 +0000 UTC m=+894.686904048" watchObservedRunningTime="2025-12-06 06:39:54.275826107 +0000 UTC m=+895.561578077" Dec 06 06:39:54 crc kubenswrapper[4823]: I1206 06:39:54.354077 4823 generic.go:334] "Generic (PLEG): container finished" podID="eb5ef3cd-9337-4665-945a-403b2619c53d" containerID="b84c0ae4ab0c07c234d944f7045cd4ba9bc5ca8401a3a1eb2b1cdef2492903d0" exitCode=0 Dec 06 06:39:54 crc kubenswrapper[4823]: I1206 06:39:54.355202 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerDied","Data":"b84c0ae4ab0c07c234d944f7045cd4ba9bc5ca8401a3a1eb2b1cdef2492903d0"} Dec 06 06:39:55 crc kubenswrapper[4823]: I1206 06:39:55.363758 4823 generic.go:334] "Generic (PLEG): container finished" podID="eb5ef3cd-9337-4665-945a-403b2619c53d" containerID="7b72a4ff0636961691cdb9bbcf5ee45033ae530140813ee3c92e25a54a63d03b" exitCode=0 Dec 06 06:39:55 crc kubenswrapper[4823]: I1206 06:39:55.363852 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerDied","Data":"7b72a4ff0636961691cdb9bbcf5ee45033ae530140813ee3c92e25a54a63d03b"} Dec 06 06:39:56 crc kubenswrapper[4823]: I1206 06:39:56.377623 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"cfa735b5b7e16d0d7a935e94d935e66df4486a3049c766fdad0ae0d794dd0664"} Dec 06 06:39:56 crc kubenswrapper[4823]: I1206 06:39:56.378384 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"72638029029d05def056011b95e4aa2d9a053e4a48fa8399a65d6619b7677030"} Dec 06 06:39:56 crc kubenswrapper[4823]: I1206 06:39:56.378397 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"8d63ffceed06fe7a19865a4bc5082590d1381fdb4e1c590177dee754747062a9"} Dec 06 06:39:56 crc kubenswrapper[4823]: I1206 06:39:56.378407 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"c9e5b72e7d3ec77ae6eb30c15d0b05fb8876ae2bb80a30df66e3d79861ed0fd5"} Dec 06 06:39:56 crc kubenswrapper[4823]: I1206 06:39:56.378417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" 
event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"4d7faddba667e3e4765626283627c9b85a14449fdd71d342cbbee2bedc967314"} Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.391832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw4mq" event={"ID":"eb5ef3cd-9337-4665-945a-403b2619c53d","Type":"ContainerStarted","Data":"f69b692cdcca107859e25d76bcf1c34d5a31009389a2a47c28f4b7b81a734cd2"} Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.392012 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pw4mq" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.407545 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z8zck"] Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.408732 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.410547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-q46r6" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.410871 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.410957 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.431572 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z8zck"] Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.435843 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pw4mq" podStartSLOduration=5.575688283 podStartE2EDuration="17.435824136s" podCreationTimestamp="2025-12-06 06:39:40 +0000 UTC" firstStartedPulling="2025-12-06 06:39:40.739841357 +0000 UTC m=+882.025593317" lastFinishedPulling="2025-12-06 06:39:52.59997721 +0000 UTC m=+893.885729170" observedRunningTime="2025-12-06 06:39:57.433271112 +0000 UTC m=+898.719023092" watchObservedRunningTime="2025-12-06 06:39:57.435824136 +0000 UTC m=+898.721576106" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.565381 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vwpc\" (UniqueName: \"kubernetes.io/projected/7b4d04b6-a05b-413c-9762-1ab5bddd1201-kube-api-access-2vwpc\") pod \"openstack-operator-index-z8zck\" (UID: \"7b4d04b6-a05b-413c-9762-1ab5bddd1201\") " pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.667605 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vwpc\" (UniqueName: \"kubernetes.io/projected/7b4d04b6-a05b-413c-9762-1ab5bddd1201-kube-api-access-2vwpc\") pod \"openstack-operator-index-z8zck\" (UID: \"7b4d04b6-a05b-413c-9762-1ab5bddd1201\") " pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.687784 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vwpc\" (UniqueName: \"kubernetes.io/projected/7b4d04b6-a05b-413c-9762-1ab5bddd1201-kube-api-access-2vwpc\") pod \"openstack-operator-index-z8zck\" (UID: \"7b4d04b6-a05b-413c-9762-1ab5bddd1201\") " 
pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:39:57 crc kubenswrapper[4823]: I1206 06:39:57.723720 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:39:58 crc kubenswrapper[4823]: I1206 06:39:58.506461 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z8zck"] Dec 06 06:39:58 crc kubenswrapper[4823]: W1206 06:39:58.513425 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4d04b6_a05b_413c_9762_1ab5bddd1201.slice/crio-f2d7686ba81aaaf06a1da20ef5212f4a21a201a744b2ed2407921097a3ac6642 WatchSource:0}: Error finding container f2d7686ba81aaaf06a1da20ef5212f4a21a201a744b2ed2407921097a3ac6642: Status 404 returned error can't find the container with id f2d7686ba81aaaf06a1da20ef5212f4a21a201a744b2ed2407921097a3ac6642 Dec 06 06:39:59 crc kubenswrapper[4823]: I1206 06:39:59.404515 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z8zck" event={"ID":"7b4d04b6-a05b-413c-9762-1ab5bddd1201","Type":"ContainerStarted","Data":"f2d7686ba81aaaf06a1da20ef5212f4a21a201a744b2ed2407921097a3ac6642"} Dec 06 06:40:00 crc kubenswrapper[4823]: I1206 06:40:00.467124 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pw4mq" Dec 06 06:40:00 crc kubenswrapper[4823]: I1206 06:40:00.505119 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pw4mq" Dec 06 06:40:00 crc kubenswrapper[4823]: I1206 06:40:00.772490 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-z8zck"] Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.261824 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-jg4v8" Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.374567 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kmmpn"] Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.375316 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.391086 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kmmpn"] Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.447535 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbp7f\" (UniqueName: \"kubernetes.io/projected/dfbacef0-81cd-45dd-870f-ca9b9a506529-kube-api-access-rbp7f\") pod \"openstack-operator-index-kmmpn\" (UID: \"dfbacef0-81cd-45dd-870f-ca9b9a506529\") " pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.550395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbp7f\" (UniqueName: \"kubernetes.io/projected/dfbacef0-81cd-45dd-870f-ca9b9a506529-kube-api-access-rbp7f\") pod \"openstack-operator-index-kmmpn\" (UID: \"dfbacef0-81cd-45dd-870f-ca9b9a506529\") " pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.573540 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbp7f\" (UniqueName: \"kubernetes.io/projected/dfbacef0-81cd-45dd-870f-ca9b9a506529-kube-api-access-rbp7f\") pod \"openstack-operator-index-kmmpn\" (UID: \"dfbacef0-81cd-45dd-870f-ca9b9a506529\") " pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:01 crc kubenswrapper[4823]: I1206 06:40:01.694528 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.348147 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kmmpn"] Dec 06 06:40:03 crc kubenswrapper[4823]: W1206 06:40:03.355457 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbacef0_81cd_45dd_870f_ca9b9a506529.slice/crio-a46f55153c714d44ce509c676abaf50f2df2d7bd367966ca0ee5d258676401a0 WatchSource:0}: Error finding container a46f55153c714d44ce509c676abaf50f2df2d7bd367966ca0ee5d258676401a0: Status 404 returned error can't find the container with id a46f55153c714d44ce509c676abaf50f2df2d7bd367966ca0ee5d258676401a0 Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.444474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kmmpn" event={"ID":"dfbacef0-81cd-45dd-870f-ca9b9a506529","Type":"ContainerStarted","Data":"a46f55153c714d44ce509c676abaf50f2df2d7bd367966ca0ee5d258676401a0"} Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.446591 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z8zck" event={"ID":"7b4d04b6-a05b-413c-9762-1ab5bddd1201","Type":"ContainerStarted","Data":"2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc"} Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.446682 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-z8zck" podUID="7b4d04b6-a05b-413c-9762-1ab5bddd1201" containerName="registry-server" containerID="cri-o://2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc" gracePeriod=2 Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.468939 4823 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z8zck" podStartSLOduration=1.819584491 podStartE2EDuration="6.468919151s" podCreationTimestamp="2025-12-06 06:39:57 +0000 UTC" firstStartedPulling="2025-12-06 06:39:58.517002846 +0000 UTC m=+899.802754806" lastFinishedPulling="2025-12-06 06:40:03.166337506 +0000 UTC m=+904.452089466" observedRunningTime="2025-12-06 06:40:03.462117595 +0000 UTC m=+904.747869555" watchObservedRunningTime="2025-12-06 06:40:03.468919151 +0000 UTC m=+904.754671111" Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.835646 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.984518 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vwpc\" (UniqueName: \"kubernetes.io/projected/7b4d04b6-a05b-413c-9762-1ab5bddd1201-kube-api-access-2vwpc\") pod \"7b4d04b6-a05b-413c-9762-1ab5bddd1201\" (UID: \"7b4d04b6-a05b-413c-9762-1ab5bddd1201\") " Dec 06 06:40:03 crc kubenswrapper[4823]: I1206 06:40:03.990081 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4d04b6-a05b-413c-9762-1ab5bddd1201-kube-api-access-2vwpc" (OuterVolumeSpecName: "kube-api-access-2vwpc") pod "7b4d04b6-a05b-413c-9762-1ab5bddd1201" (UID: "7b4d04b6-a05b-413c-9762-1ab5bddd1201"). InnerVolumeSpecName "kube-api-access-2vwpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.085803 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vwpc\" (UniqueName: \"kubernetes.io/projected/7b4d04b6-a05b-413c-9762-1ab5bddd1201-kube-api-access-2vwpc\") on node \"crc\" DevicePath \"\"" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.454864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kmmpn" event={"ID":"dfbacef0-81cd-45dd-870f-ca9b9a506529","Type":"ContainerStarted","Data":"e1c64cafaddd8d688ebe221c225a5a95cabdb023294a5e7ea1febfdde0a5667b"} Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.457018 4823 generic.go:334] "Generic (PLEG): container finished" podID="7b4d04b6-a05b-413c-9762-1ab5bddd1201" containerID="2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc" exitCode=0 Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.457059 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z8zck" event={"ID":"7b4d04b6-a05b-413c-9762-1ab5bddd1201","Type":"ContainerDied","Data":"2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc"} Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.457079 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z8zck" event={"ID":"7b4d04b6-a05b-413c-9762-1ab5bddd1201","Type":"ContainerDied","Data":"f2d7686ba81aaaf06a1da20ef5212f4a21a201a744b2ed2407921097a3ac6642"} Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.457092 4823 util.go:48] "No ready sandbox for pod can be found. 
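
Note: the "Killing container with a grace period" entry above (gracePeriod=2 for the z8zck registry-server) is the kubelet asking the runtime to stop the container, with SIGKILL as the escalation once the grace period lapses; the ContainerDied events that follow are the result. A sketch of that stop sequence, under the assumption of plain process signals rather than the CRI StopContainer call the kubelet actually issues.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the kubelet's stop sequence in spirit: ask the
// process to exit (SIGTERM), then escalate to SIGKILL once the grace
// period runs out.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		cmd.Process.Kill() // grace period exhausted: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in workload, illustrative
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 2*time.Second)) // gracePeriod=2, as in the log
}
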
Need to start a new one" pod="openstack-operators/openstack-operator-index-z8zck" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.457100 4823 scope.go:117] "RemoveContainer" containerID="2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.476474 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kmmpn" podStartSLOduration=3.397310361 podStartE2EDuration="3.476453725s" podCreationTimestamp="2025-12-06 06:40:01 +0000 UTC" firstStartedPulling="2025-12-06 06:40:03.362548831 +0000 UTC m=+904.648300791" lastFinishedPulling="2025-12-06 06:40:03.441692195 +0000 UTC m=+904.727444155" observedRunningTime="2025-12-06 06:40:04.469551386 +0000 UTC m=+905.755303346" watchObservedRunningTime="2025-12-06 06:40:04.476453725 +0000 UTC m=+905.762205685" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.487450 4823 scope.go:117] "RemoveContainer" containerID="2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc" Dec 06 06:40:04 crc kubenswrapper[4823]: E1206 06:40:04.488332 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc\": container with ID starting with 2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc not found: ID does not exist" containerID="2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.488398 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc"} err="failed to get container status \"2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc\": rpc error: code = NotFound desc = could not find container \"2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc\": container with ID starting with 2c2de1f3330d9c98b3d46876408661f34e462efa3070ff510671c102c9311fcc not found: ID does not exist" Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.488435 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-z8zck"] Dec 06 06:40:04 crc kubenswrapper[4823]: I1206 06:40:04.493401 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-z8zck"] Dec 06 06:40:05 crc kubenswrapper[4823]: I1206 06:40:05.149430 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4d04b6-a05b-413c-9762-1ab5bddd1201" path="/var/lib/kubelet/pods/7b4d04b6-a05b-413c-9762-1ab5bddd1201/volumes" Dec 06 06:40:10 crc kubenswrapper[4823]: I1206 06:40:10.472191 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pw4mq" Dec 06 06:40:10 crc kubenswrapper[4823]: I1206 06:40:10.656707 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" Dec 06 06:40:11 crc kubenswrapper[4823]: I1206 06:40:11.695134 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:11 crc kubenswrapper[4823]: I1206 06:40:11.695234 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:11 crc kubenswrapper[4823]: I1206 06:40:11.723568 
4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:12 crc kubenswrapper[4823]: I1206 06:40:12.533938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kmmpn" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.884777 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw"] Dec 06 06:40:18 crc kubenswrapper[4823]: E1206 06:40:18.885294 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4d04b6-a05b-413c-9762-1ab5bddd1201" containerName="registry-server" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.885308 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4d04b6-a05b-413c-9762-1ab5bddd1201" containerName="registry-server" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.885440 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4d04b6-a05b-413c-9762-1ab5bddd1201" containerName="registry-server" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.886280 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.888754 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vzws4" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.894374 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw"] Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.993638 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-util\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.993748 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-bundle\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:18 crc kubenswrapper[4823]: I1206 06:40:18.993777 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqs24\" (UniqueName: \"kubernetes.io/projected/6933e1c2-852c-4eab-9956-c93bc9027c9d-kube-api-access-mqs24\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.095257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-util\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " 
pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.095349 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-bundle\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.095382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqs24\" (UniqueName: \"kubernetes.io/projected/6933e1c2-852c-4eab-9956-c93bc9027c9d-kube-api-access-mqs24\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.096006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-bundle\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.096051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-util\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.119572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqs24\" (UniqueName: \"kubernetes.io/projected/6933e1c2-852c-4eab-9956-c93bc9027c9d-kube-api-access-mqs24\") pod \"230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.206255 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:19 crc kubenswrapper[4823]: I1206 06:40:19.670302 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw"] Dec 06 06:40:19 crc kubenswrapper[4823]: W1206 06:40:19.674240 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6933e1c2_852c_4eab_9956_c93bc9027c9d.slice/crio-ac1067fb41697e5b40b12b157fe317c8b61abfe3213031ab777e2a575e5bc609 WatchSource:0}: Error finding container ac1067fb41697e5b40b12b157fe317c8b61abfe3213031ab777e2a575e5bc609: Status 404 returned error can't find the container with id ac1067fb41697e5b40b12b157fe317c8b61abfe3213031ab777e2a575e5bc609 Dec 06 06:40:20 crc kubenswrapper[4823]: E1206 06:40:20.072886 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6933e1c2_852c_4eab_9956_c93bc9027c9d.slice/crio-c268ce7d52b4f138c914f03a0dc6d65ed89ce7d4a6623b7e58e9ee3b2d19b263.scope\": RecentStats: unable to find data in memory cache]" Dec 06 06:40:20 crc kubenswrapper[4823]: I1206 06:40:20.558137 4823 generic.go:334] "Generic (PLEG): container finished" podID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerID="c268ce7d52b4f138c914f03a0dc6d65ed89ce7d4a6623b7e58e9ee3b2d19b263" exitCode=0 Dec 06 06:40:20 crc kubenswrapper[4823]: I1206 06:40:20.558184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" event={"ID":"6933e1c2-852c-4eab-9956-c93bc9027c9d","Type":"ContainerDied","Data":"c268ce7d52b4f138c914f03a0dc6d65ed89ce7d4a6623b7e58e9ee3b2d19b263"} Dec 06 06:40:20 crc kubenswrapper[4823]: I1206 06:40:20.558255 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" event={"ID":"6933e1c2-852c-4eab-9956-c93bc9027c9d","Type":"ContainerStarted","Data":"ac1067fb41697e5b40b12b157fe317c8b61abfe3213031ab777e2a575e5bc609"} Dec 06 06:40:21 crc kubenswrapper[4823]: I1206 06:40:21.565605 4823 generic.go:334] "Generic (PLEG): container finished" podID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerID="2f1197960e9b4719c9249828ba44a51b55d3fa98f1626507e6362352d5ef91db" exitCode=0 Dec 06 06:40:21 crc kubenswrapper[4823]: I1206 06:40:21.565704 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" event={"ID":"6933e1c2-852c-4eab-9956-c93bc9027c9d","Type":"ContainerDied","Data":"2f1197960e9b4719c9249828ba44a51b55d3fa98f1626507e6362352d5ef91db"} Dec 06 06:40:24 crc kubenswrapper[4823]: I1206 06:40:24.586923 4823 generic.go:334] "Generic (PLEG): container finished" podID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerID="2770d90e37f64e877ad7ac15cd53610df23f58882e701ebdb4fc0ce47c6e9a44" exitCode=0 Dec 06 06:40:24 crc kubenswrapper[4823]: I1206 06:40:24.586987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" event={"ID":"6933e1c2-852c-4eab-9956-c93bc9027c9d","Type":"ContainerDied","Data":"2770d90e37f64e877ad7ac15cd53610df23f58882e701ebdb4fc0ce47c6e9a44"} Dec 06 06:40:25 crc kubenswrapper[4823]: I1206 06:40:25.877816 4823 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:25 crc kubenswrapper[4823]: I1206 06:40:25.989947 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqs24\" (UniqueName: \"kubernetes.io/projected/6933e1c2-852c-4eab-9956-c93bc9027c9d-kube-api-access-mqs24\") pod \"6933e1c2-852c-4eab-9956-c93bc9027c9d\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " Dec 06 06:40:25 crc kubenswrapper[4823]: I1206 06:40:25.990063 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-bundle\") pod \"6933e1c2-852c-4eab-9956-c93bc9027c9d\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " Dec 06 06:40:25 crc kubenswrapper[4823]: I1206 06:40:25.990142 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-util\") pod \"6933e1c2-852c-4eab-9956-c93bc9027c9d\" (UID: \"6933e1c2-852c-4eab-9956-c93bc9027c9d\") " Dec 06 06:40:25 crc kubenswrapper[4823]: I1206 06:40:25.990835 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-bundle" (OuterVolumeSpecName: "bundle") pod "6933e1c2-852c-4eab-9956-c93bc9027c9d" (UID: "6933e1c2-852c-4eab-9956-c93bc9027c9d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:40:25 crc kubenswrapper[4823]: I1206 06:40:25.995422 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6933e1c2-852c-4eab-9956-c93bc9027c9d-kube-api-access-mqs24" (OuterVolumeSpecName: "kube-api-access-mqs24") pod "6933e1c2-852c-4eab-9956-c93bc9027c9d" (UID: "6933e1c2-852c-4eab-9956-c93bc9027c9d"). InnerVolumeSpecName "kube-api-access-mqs24". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.002037 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-util" (OuterVolumeSpecName: "util") pod "6933e1c2-852c-4eab-9956-c93bc9027c9d" (UID: "6933e1c2-852c-4eab-9956-c93bc9027c9d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.091309 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-util\") on node \"crc\" DevicePath \"\"" Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.091349 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqs24\" (UniqueName: \"kubernetes.io/projected/6933e1c2-852c-4eab-9956-c93bc9027c9d-kube-api-access-mqs24\") on node \"crc\" DevicePath \"\"" Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.091361 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6933e1c2-852c-4eab-9956-c93bc9027c9d-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.600534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" event={"ID":"6933e1c2-852c-4eab-9956-c93bc9027c9d","Type":"ContainerDied","Data":"ac1067fb41697e5b40b12b157fe317c8b61abfe3213031ab777e2a575e5bc609"} Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.600592 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac1067fb41697e5b40b12b157fe317c8b61abfe3213031ab777e2a575e5bc609" Dec 06 06:40:26 crc kubenswrapper[4823]: I1206 06:40:26.600978 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.712875 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b"] Dec 06 06:40:31 crc kubenswrapper[4823]: E1206 06:40:31.713703 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="extract" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.713719 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="extract" Dec 06 06:40:31 crc kubenswrapper[4823]: E1206 06:40:31.713734 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="pull" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.713742 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="pull" Dec 06 06:40:31 crc kubenswrapper[4823]: E1206 06:40:31.713767 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="util" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.713774 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="util" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.713914 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6933e1c2-852c-4eab-9956-c93bc9027c9d" containerName="extract" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.714504 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.718038 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-fs6mp" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.733169 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b"] Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.768355 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4nrt\" (UniqueName: \"kubernetes.io/projected/f3b9d10e-c904-4cef-aad2-1d9428fc198d-kube-api-access-h4nrt\") pod \"openstack-operator-controller-operator-f4b959fdf-fzm4b\" (UID: \"f3b9d10e-c904-4cef-aad2-1d9428fc198d\") " pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.870123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nrt\" (UniqueName: \"kubernetes.io/projected/f3b9d10e-c904-4cef-aad2-1d9428fc198d-kube-api-access-h4nrt\") pod \"openstack-operator-controller-operator-f4b959fdf-fzm4b\" (UID: \"f3b9d10e-c904-4cef-aad2-1d9428fc198d\") " pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:31 crc kubenswrapper[4823]: I1206 06:40:31.889311 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4nrt\" (UniqueName: \"kubernetes.io/projected/f3b9d10e-c904-4cef-aad2-1d9428fc198d-kube-api-access-h4nrt\") pod \"openstack-operator-controller-operator-f4b959fdf-fzm4b\" (UID: \"f3b9d10e-c904-4cef-aad2-1d9428fc198d\") " pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:32 crc kubenswrapper[4823]: I1206 06:40:32.044079 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:32 crc kubenswrapper[4823]: I1206 06:40:32.284477 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b"] Dec 06 06:40:32 crc kubenswrapper[4823]: I1206 06:40:32.637275 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" event={"ID":"f3b9d10e-c904-4cef-aad2-1d9428fc198d","Type":"ContainerStarted","Data":"d490a78e9ff2aca8ea8b4861878ae0f67a4558696dafe4c9bd7101ce8c7310b5"} Dec 06 06:40:36 crc kubenswrapper[4823]: I1206 06:40:36.051908 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:40:36 crc kubenswrapper[4823]: I1206 06:40:36.052230 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:40:39 crc kubenswrapper[4823]: I1206 06:40:39.765618 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" event={"ID":"f3b9d10e-c904-4cef-aad2-1d9428fc198d","Type":"ContainerStarted","Data":"ecf145e1c2aba7ab200f5550b0f3223fb1819c756a938c76307d0563bff18c0f"} Dec 06 06:40:39 crc kubenswrapper[4823]: I1206 06:40:39.766211 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:39 crc kubenswrapper[4823]: I1206 06:40:39.800621 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" podStartSLOduration=1.794675354 podStartE2EDuration="8.800593242s" podCreationTimestamp="2025-12-06 06:40:31 +0000 UTC" firstStartedPulling="2025-12-06 06:40:32.300427726 +0000 UTC m=+933.586179686" lastFinishedPulling="2025-12-06 06:40:39.306345614 +0000 UTC m=+940.592097574" observedRunningTime="2025-12-06 06:40:39.796122743 +0000 UTC m=+941.081874733" watchObservedRunningTime="2025-12-06 06:40:39.800593242 +0000 UTC m=+941.086345202" Dec 06 06:40:52 crc kubenswrapper[4823]: I1206 06:40:52.046764 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-f4b959fdf-fzm4b" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.634277 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g7lsc"] Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.635815 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.646642 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7lsc"] Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.832461 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhjn\" (UniqueName: \"kubernetes.io/projected/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-kube-api-access-5jhjn\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.832543 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-catalog-content\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.832601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-utilities\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.934074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-utilities\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.934165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhjn\" (UniqueName: \"kubernetes.io/projected/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-kube-api-access-5jhjn\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.934217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-catalog-content\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.934801 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-utilities\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.934835 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-catalog-content\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:53 crc kubenswrapper[4823]: I1206 06:40:53.958859 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5jhjn\" (UniqueName: \"kubernetes.io/projected/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-kube-api-access-5jhjn\") pod \"redhat-marketplace-g7lsc\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:54 crc kubenswrapper[4823]: I1206 06:40:54.254554 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:40:54 crc kubenswrapper[4823]: I1206 06:40:54.835340 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7lsc"] Dec 06 06:40:54 crc kubenswrapper[4823]: I1206 06:40:54.859909 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7lsc" event={"ID":"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c","Type":"ContainerStarted","Data":"2befe1cbf941a557ed40087fd7f9a6009b0da1f29618b5d14ad8bd63fd7cf321"} Dec 06 06:40:56 crc kubenswrapper[4823]: I1206 06:40:56.876004 4823 generic.go:334] "Generic (PLEG): container finished" podID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerID="02f77e65d398a63d5515eb4c815c91ab2ff03227d329fb2b16272bfbe11edd80" exitCode=0 Dec 06 06:40:56 crc kubenswrapper[4823]: I1206 06:40:56.876179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7lsc" event={"ID":"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c","Type":"ContainerDied","Data":"02f77e65d398a63d5515eb4c815c91ab2ff03227d329fb2b16272bfbe11edd80"} Dec 06 06:40:56 crc kubenswrapper[4823]: I1206 06:40:56.879289 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:40:58 crc kubenswrapper[4823]: I1206 06:40:58.902738 4823 generic.go:334] "Generic (PLEG): container finished" podID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerID="5c662d4f8591594088310f6e4e1eddfd21b00f9549fd7e372d5b2ec23d718973" exitCode=0 Dec 06 06:40:58 crc kubenswrapper[4823]: I1206 06:40:58.902804 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7lsc" event={"ID":"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c","Type":"ContainerDied","Data":"5c662d4f8591594088310f6e4e1eddfd21b00f9549fd7e372d5b2ec23d718973"} Dec 06 06:40:59 crc kubenswrapper[4823]: I1206 06:40:59.910259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7lsc" event={"ID":"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c","Type":"ContainerStarted","Data":"480447b44a46288255f9d2f330329daf35e9f79d96878a094e1996bba7aaace2"} Dec 06 06:40:59 crc kubenswrapper[4823]: I1206 06:40:59.944182 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g7lsc" podStartSLOduration=4.191625094 podStartE2EDuration="6.944161982s" podCreationTimestamp="2025-12-06 06:40:53 +0000 UTC" firstStartedPulling="2025-12-06 06:40:56.879077443 +0000 UTC m=+958.164829403" lastFinishedPulling="2025-12-06 06:40:59.631614331 +0000 UTC m=+960.917366291" observedRunningTime="2025-12-06 06:40:59.941504696 +0000 UTC m=+961.227256666" watchObservedRunningTime="2025-12-06 06:40:59.944161982 +0000 UTC m=+961.229913942" Dec 06 06:41:04 crc kubenswrapper[4823]: I1206 06:41:04.256603 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:41:04 crc kubenswrapper[4823]: I1206 06:41:04.257172 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:41:04 crc kubenswrapper[4823]: I1206 06:41:04.302817 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:41:04 crc kubenswrapper[4823]: I1206 06:41:04.985782 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:41:06 crc kubenswrapper[4823]: I1206 06:41:06.052112 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:41:06 crc kubenswrapper[4823]: I1206 06:41:06.052182 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:41:06 crc kubenswrapper[4823]: I1206 06:41:06.625887 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7lsc"] Dec 06 06:41:06 crc kubenswrapper[4823]: I1206 06:41:06.954923 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g7lsc" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="registry-server" containerID="cri-o://480447b44a46288255f9d2f330329daf35e9f79d96878a094e1996bba7aaace2" gracePeriod=2 Dec 06 06:41:08 crc kubenswrapper[4823]: I1206 06:41:08.970457 4823 generic.go:334] "Generic (PLEG): container finished" podID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerID="480447b44a46288255f9d2f330329daf35e9f79d96878a094e1996bba7aaace2" exitCode=0 Dec 06 06:41:08 crc kubenswrapper[4823]: I1206 06:41:08.970545 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7lsc" event={"ID":"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c","Type":"ContainerDied","Data":"480447b44a46288255f9d2f330329daf35e9f79d96878a094e1996bba7aaace2"} Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.643081 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.814148 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-catalog-content\") pod \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.814289 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jhjn\" (UniqueName: \"kubernetes.io/projected/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-kube-api-access-5jhjn\") pod \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.814347 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-utilities\") pod \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\" (UID: \"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c\") " Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.815459 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-utilities" (OuterVolumeSpecName: "utilities") pod "121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" (UID: "121d2e06-3a8d-402e-8ea0-0d4513dd7f9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.820231 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-kube-api-access-5jhjn" (OuterVolumeSpecName: "kube-api-access-5jhjn") pod "121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" (UID: "121d2e06-3a8d-402e-8ea0-0d4513dd7f9c"). InnerVolumeSpecName "kube-api-access-5jhjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.834315 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" (UID: "121d2e06-3a8d-402e-8ea0-0d4513dd7f9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.915387 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.915418 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jhjn\" (UniqueName: \"kubernetes.io/projected/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-kube-api-access-5jhjn\") on node \"crc\" DevicePath \"\"" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.915429 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.978834 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7lsc" event={"ID":"121d2e06-3a8d-402e-8ea0-0d4513dd7f9c","Type":"ContainerDied","Data":"2befe1cbf941a557ed40087fd7f9a6009b0da1f29618b5d14ad8bd63fd7cf321"} Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.978874 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7lsc" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.978895 4823 scope.go:117] "RemoveContainer" containerID="480447b44a46288255f9d2f330329daf35e9f79d96878a094e1996bba7aaace2" Dec 06 06:41:09 crc kubenswrapper[4823]: I1206 06:41:09.998382 4823 scope.go:117] "RemoveContainer" containerID="5c662d4f8591594088310f6e4e1eddfd21b00f9549fd7e372d5b2ec23d718973" Dec 06 06:41:10 crc kubenswrapper[4823]: I1206 06:41:10.008894 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7lsc"] Dec 06 06:41:10 crc kubenswrapper[4823]: I1206 06:41:10.015848 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7lsc"] Dec 06 06:41:10 crc kubenswrapper[4823]: I1206 06:41:10.032822 4823 scope.go:117] "RemoveContainer" containerID="02f77e65d398a63d5515eb4c815c91ab2ff03227d329fb2b16272bfbe11edd80" Dec 06 06:41:11 crc kubenswrapper[4823]: I1206 06:41:11.149639 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" path="/var/lib/kubelet/pods/121d2e06-3a8d-402e-8ea0-0d4513dd7f9c/volumes" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.734334 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc"] Dec 06 06:41:19 crc kubenswrapper[4823]: E1206 06:41:19.735377 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="extract-content" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.735393 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="extract-content" Dec 06 06:41:19 crc kubenswrapper[4823]: E1206 06:41:19.735405 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="registry-server" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.735411 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="registry-server" Dec 06 06:41:19 crc kubenswrapper[4823]: E1206 
06:41:19.735430 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="extract-utilities" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.735438 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="extract-utilities" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.735562 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="121d2e06-3a8d-402e-8ea0-0d4513dd7f9c" containerName="registry-server" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.736460 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.743441 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8h8w8" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.754285 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.755734 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.757636 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xqhrz" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.761966 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.769765 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.782250 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.783617 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.786616 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-b9wws" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.804280 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.829468 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dn7\" (UniqueName: \"kubernetes.io/projected/af7acc94-0229-4055-b0ea-e5646c927e7a-kube-api-access-k2dn7\") pod \"barbican-operator-controller-manager-7d9dfd778-4xsdc\" (UID: \"af7acc94-0229-4055-b0ea-e5646c927e7a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.829621 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsnr\" (UniqueName: \"kubernetes.io/projected/3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf-kube-api-access-hxsnr\") pod \"cinder-operator-controller-manager-859b6ccc6-5jsvb\" (UID: \"3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.849115 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.850437 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.853564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hwtmh" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.873197 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.888256 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.889526 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.894848 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-psv68" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.900030 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.923721 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.925742 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.931645 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5fg74" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.931809 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dn7\" (UniqueName: \"kubernetes.io/projected/af7acc94-0229-4055-b0ea-e5646c927e7a-kube-api-access-k2dn7\") pod \"barbican-operator-controller-manager-7d9dfd778-4xsdc\" (UID: \"af7acc94-0229-4055-b0ea-e5646c927e7a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.931844 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478v2\" (UniqueName: \"kubernetes.io/projected/9bc807b4-b176-4249-9610-b4c92f99fb0b-kube-api-access-478v2\") pod \"glance-operator-controller-manager-77987cd8cd-mwbpk\" (UID: \"9bc807b4-b176-4249-9610-b4c92f99fb0b\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.931872 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg557\" (UniqueName: \"kubernetes.io/projected/69d7c5b3-6bb3-4545-bcf3-9613f979646d-kube-api-access-fg557\") pod \"designate-operator-controller-manager-78b4bc895b-9r9sg\" (UID: \"69d7c5b3-6bb3-4545-bcf3-9613f979646d\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.931926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsnr\" (UniqueName: \"kubernetes.io/projected/3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf-kube-api-access-hxsnr\") pod \"cinder-operator-controller-manager-859b6ccc6-5jsvb\" (UID: \"3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.946793 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.962879 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.964222 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.980138 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh"] Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.980316 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.980452 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-79h6v" Dec 06 06:41:19 crc kubenswrapper[4823]: I1206 06:41:19.989587 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dn7\" (UniqueName: \"kubernetes.io/projected/af7acc94-0229-4055-b0ea-e5646c927e7a-kube-api-access-k2dn7\") pod \"barbican-operator-controller-manager-7d9dfd778-4xsdc\" (UID: \"af7acc94-0229-4055-b0ea-e5646c927e7a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.000572 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.001901 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.007150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsnr\" (UniqueName: \"kubernetes.io/projected/3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf-kube-api-access-hxsnr\") pod \"cinder-operator-controller-manager-859b6ccc6-5jsvb\" (UID: \"3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.016121 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.018259 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-x5dzq" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.033060 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-478v2\" (UniqueName: \"kubernetes.io/projected/9bc807b4-b176-4249-9610-b4c92f99fb0b-kube-api-access-478v2\") pod \"glance-operator-controller-manager-77987cd8cd-mwbpk\" (UID: \"9bc807b4-b176-4249-9610-b4c92f99fb0b\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.033135 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg557\" (UniqueName: \"kubernetes.io/projected/69d7c5b3-6bb3-4545-bcf3-9613f979646d-kube-api-access-fg557\") pod \"designate-operator-controller-manager-78b4bc895b-9r9sg\" (UID: \"69d7c5b3-6bb3-4545-bcf3-9613f979646d\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.033179 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xss97\" (UniqueName: 
\"kubernetes.io/projected/22c2c4cb-ba18-4f49-9986-9095779c93dc-kube-api-access-xss97\") pod \"horizon-operator-controller-manager-68c6d99b8f-nggsj\" (UID: \"22c2c4cb-ba18-4f49-9986-9095779c93dc\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.033246 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.033301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvkb\" (UniqueName: \"kubernetes.io/projected/4001a5be-6496-49c2-971c-50723e76c864-kube-api-access-ltvkb\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.033354 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xgb\" (UniqueName: \"kubernetes.io/projected/25f101a2-6154-43f7-b4ef-2679a4ebacc9-kube-api-access-q8xgb\") pod \"heat-operator-controller-manager-5f64f6f8bb-4fdh4\" (UID: \"25f101a2-6154-43f7-b4ef-2679a4ebacc9\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.064744 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-478v2\" (UniqueName: \"kubernetes.io/projected/9bc807b4-b176-4249-9610-b4c92f99fb0b-kube-api-access-478v2\") pod \"glance-operator-controller-manager-77987cd8cd-mwbpk\" (UID: \"9bc807b4-b176-4249-9610-b4c92f99fb0b\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.112051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg557\" (UniqueName: \"kubernetes.io/projected/69d7c5b3-6bb3-4545-bcf3-9613f979646d-kube-api-access-fg557\") pod \"designate-operator-controller-manager-78b4bc895b-9r9sg\" (UID: \"69d7c5b3-6bb3-4545-bcf3-9613f979646d\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.112787 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.115787 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.116026 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.139316 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.139412 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvkb\" (UniqueName: \"kubernetes.io/projected/4001a5be-6496-49c2-971c-50723e76c864-kube-api-access-ltvkb\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.139485 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8xgb\" (UniqueName: \"kubernetes.io/projected/25f101a2-6154-43f7-b4ef-2679a4ebacc9-kube-api-access-q8xgb\") pod \"heat-operator-controller-manager-5f64f6f8bb-4fdh4\" (UID: \"25f101a2-6154-43f7-b4ef-2679a4ebacc9\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.139578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xss97\" (UniqueName: \"kubernetes.io/projected/22c2c4cb-ba18-4f49-9986-9095779c93dc-kube-api-access-xss97\") pod \"horizon-operator-controller-manager-68c6d99b8f-nggsj\" (UID: \"22c2c4cb-ba18-4f49-9986-9095779c93dc\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.139624 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbthq\" (UniqueName: \"kubernetes.io/projected/e98ba71e-3a94-4c9e-b82a-e18dcb197cf9-kube-api-access-xbthq\") pod \"ironic-operator-controller-manager-6c548fd776-d4pqr\" (UID: \"e98ba71e-3a94-4c9e-b82a-e18dcb197cf9\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:41:20 crc kubenswrapper[4823]: E1206 06:41:20.145241 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:20 crc kubenswrapper[4823]: E1206 06:41:20.145328 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert podName:4001a5be-6496-49c2-971c-50723e76c864 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:20.645303709 +0000 UTC m=+981.931055669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert") pod "infra-operator-controller-manager-57548d458d-7lkmh" (UID: "4001a5be-6496-49c2-971c-50723e76c864") : secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.152863 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.161363 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.176564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6ntbn" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.176650 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xczmc"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.177173 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.178495 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.191378 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvkb\" (UniqueName: \"kubernetes.io/projected/4001a5be-6496-49c2-971c-50723e76c864-kube-api-access-ltvkb\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.194520 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8xgb\" (UniqueName: \"kubernetes.io/projected/25f101a2-6154-43f7-b4ef-2679a4ebacc9-kube-api-access-q8xgb\") pod \"heat-operator-controller-manager-5f64f6f8bb-4fdh4\" (UID: \"25f101a2-6154-43f7-b4ef-2679a4ebacc9\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.207471 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.209064 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.211578 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p9cs2" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.215015 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xczmc"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.216940 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.223753 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.232331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xss97\" (UniqueName: \"kubernetes.io/projected/22c2c4cb-ba18-4f49-9986-9095779c93dc-kube-api-access-xss97\") pod \"horizon-operator-controller-manager-68c6d99b8f-nggsj\" (UID: \"22c2c4cb-ba18-4f49-9986-9095779c93dc\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.241588 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-catalog-content\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.263484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-utilities\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.263583 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvd4\" (UniqueName: \"kubernetes.io/projected/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-kube-api-access-rnvd4\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.263756 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqql9\" (UniqueName: \"kubernetes.io/projected/cb125116-0c3b-4831-a05c-9076f5360e28-kube-api-access-cqql9\") pod \"keystone-operator-controller-manager-7765d96ddf-z7czp\" (UID: \"cb125116-0c3b-4831-a05c-9076f5360e28\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.263937 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbthq\" (UniqueName: \"kubernetes.io/projected/e98ba71e-3a94-4c9e-b82a-e18dcb197cf9-kube-api-access-xbthq\") pod \"ironic-operator-controller-manager-6c548fd776-d4pqr\" (UID: \"e98ba71e-3a94-4c9e-b82a-e18dcb197cf9\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.246556 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.265940 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.289411 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.290913 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.295956 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vwgph" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.304551 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbthq\" (UniqueName: \"kubernetes.io/projected/e98ba71e-3a94-4c9e-b82a-e18dcb197cf9-kube-api-access-xbthq\") pod \"ironic-operator-controller-manager-6c548fd776-d4pqr\" (UID: \"e98ba71e-3a94-4c9e-b82a-e18dcb197cf9\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.306776 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.308136 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.310145 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bt997" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.369351 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-utilities\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.369428 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvd4\" (UniqueName: \"kubernetes.io/projected/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-kube-api-access-rnvd4\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.369509 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmhv\" (UniqueName: \"kubernetes.io/projected/147b67a9-b422-48ba-b948-a1b42946ef1d-kube-api-access-gdmhv\") pod \"manila-operator-controller-manager-7c79b5df47-n56m7\" (UID: \"147b67a9-b422-48ba-b948-a1b42946ef1d\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.369542 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqql9\" (UniqueName: \"kubernetes.io/projected/cb125116-0c3b-4831-a05c-9076f5360e28-kube-api-access-cqql9\") pod \"keystone-operator-controller-manager-7765d96ddf-z7czp\" (UID: \"cb125116-0c3b-4831-a05c-9076f5360e28\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:41:20 crc 
kubenswrapper[4823]: I1206 06:41:20.369575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7prk\" (UniqueName: \"kubernetes.io/projected/03d20c66-aa09-43f5-848a-b352868fb3de-kube-api-access-n7prk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-m9pvc\" (UID: \"03d20c66-aa09-43f5-848a-b352868fb3de\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.369622 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-catalog-content\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.370145 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-utilities\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.370570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-catalog-content\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.375574 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.399731 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.401150 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.404136 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.419263 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-b45jg"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.420705 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.421212 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.424790 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-crqmj" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.424978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-v788k" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.432365 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvd4\" (UniqueName: \"kubernetes.io/projected/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-kube-api-access-rnvd4\") pod \"certified-operators-xczmc\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.437350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqql9\" (UniqueName: \"kubernetes.io/projected/cb125116-0c3b-4831-a05c-9076f5360e28-kube-api-access-cqql9\") pod \"keystone-operator-controller-manager-7765d96ddf-z7czp\" (UID: \"cb125116-0c3b-4831-a05c-9076f5360e28\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.443958 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.455796 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-b45jg"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.463335 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.464853 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.470452 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hqjqk" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.471536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmhv\" (UniqueName: \"kubernetes.io/projected/147b67a9-b422-48ba-b948-a1b42946ef1d-kube-api-access-gdmhv\") pod \"manila-operator-controller-manager-7c79b5df47-n56m7\" (UID: \"147b67a9-b422-48ba-b948-a1b42946ef1d\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.471607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54rz\" (UniqueName: \"kubernetes.io/projected/a72ff6fc-2086-4e96-9bc7-7298a0304e5e-kube-api-access-l54rz\") pod \"nova-operator-controller-manager-697bc559fc-jdgsz\" (UID: \"a72ff6fc-2086-4e96-9bc7-7298a0304e5e\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.471646 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7prk\" (UniqueName: \"kubernetes.io/projected/03d20c66-aa09-43f5-848a-b352868fb3de-kube-api-access-n7prk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-m9pvc\" (UID: \"03d20c66-aa09-43f5-848a-b352868fb3de\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.471739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45xf\" (UniqueName: \"kubernetes.io/projected/25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4-kube-api-access-d45xf\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ncd4b\" (UID: \"25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.498487 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.502369 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.507710 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.509467 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdmhv\" (UniqueName: \"kubernetes.io/projected/147b67a9-b422-48ba-b948-a1b42946ef1d-kube-api-access-gdmhv\") pod \"manila-operator-controller-manager-7c79b5df47-n56m7\" (UID: \"147b67a9-b422-48ba-b948-a1b42946ef1d\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.511708 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6r62z" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.511978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.512585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7prk\" (UniqueName: \"kubernetes.io/projected/03d20c66-aa09-43f5-848a-b352868fb3de-kube-api-access-n7prk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-m9pvc\" (UID: \"03d20c66-aa09-43f5-848a-b352868fb3de\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.528080 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.529291 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.532384 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p9k7l" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.551006 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.583594 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.584287 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.585196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l54rz\" (UniqueName: \"kubernetes.io/projected/a72ff6fc-2086-4e96-9bc7-7298a0304e5e-kube-api-access-l54rz\") pod \"nova-operator-controller-manager-697bc559fc-jdgsz\" (UID: \"a72ff6fc-2086-4e96-9bc7-7298a0304e5e\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.586441 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.586490 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45xf\" (UniqueName: \"kubernetes.io/projected/25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4-kube-api-access-d45xf\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ncd4b\" (UID: \"25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.586537 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnz6g\" (UniqueName: \"kubernetes.io/projected/2c435a39-34e9-4d43-bff4-4f5d5a7f1275-kube-api-access-xnz6g\") pod \"octavia-operator-controller-manager-998648c74-b45jg\" (UID: \"2c435a39-34e9-4d43-bff4-4f5d5a7f1275\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.586569 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnhw\" (UniqueName: \"kubernetes.io/projected/424f7266-0185-4f27-9de3-1daf6a06dd2c-kube-api-access-bxnhw\") pod \"ovn-operator-controller-manager-b6456fdb6-ggt2m\" (UID: \"424f7266-0185-4f27-9de3-1daf6a06dd2c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.586672 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d825\" (UniqueName: \"kubernetes.io/projected/0055dc6b-eac6-40aa-adad-1a5202efabb7-kube-api-access-9d825\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.633166 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.637115 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.651615 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.655216 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.668828 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.670585 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.671935 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fd2lq" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.688806 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.688936 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d6w7\" (UniqueName: \"kubernetes.io/projected/d50c6d95-dbef-423c-8094-f8a1634d9b72-kube-api-access-4d6w7\") pod \"placement-operator-controller-manager-78f8948974-9mbh5\" (UID: \"d50c6d95-dbef-423c-8094-f8a1634d9b72\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.688977 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnz6g\" (UniqueName: \"kubernetes.io/projected/2c435a39-34e9-4d43-bff4-4f5d5a7f1275-kube-api-access-xnz6g\") pod \"octavia-operator-controller-manager-998648c74-b45jg\" (UID: \"2c435a39-34e9-4d43-bff4-4f5d5a7f1275\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.689152 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxnhw\" (UniqueName: \"kubernetes.io/projected/424f7266-0185-4f27-9de3-1daf6a06dd2c-kube-api-access-bxnhw\") pod \"ovn-operator-controller-manager-b6456fdb6-ggt2m\" (UID: \"424f7266-0185-4f27-9de3-1daf6a06dd2c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.689194 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.689265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d825\" (UniqueName: \"kubernetes.io/projected/0055dc6b-eac6-40aa-adad-1a5202efabb7-kube-api-access-9d825\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:20 crc kubenswrapper[4823]: E1206 06:41:20.689840 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:20 crc kubenswrapper[4823]: E1206 06:41:20.689910 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert podName:0055dc6b-eac6-40aa-adad-1a5202efabb7 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:21.189891428 +0000 UTC m=+982.475643388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" (UID: "0055dc6b-eac6-40aa-adad-1a5202efabb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:20 crc kubenswrapper[4823]: E1206 06:41:20.690577 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:20 crc kubenswrapper[4823]: E1206 06:41:20.690631 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert podName:4001a5be-6496-49c2-971c-50723e76c864 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:21.690612329 +0000 UTC m=+982.976364289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert") pod "infra-operator-controller-manager-57548d458d-7lkmh" (UID: "4001a5be-6496-49c2-971c-50723e76c864") : secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.696803 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hqxk5" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.707008 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.728419 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d825\" (UniqueName: \"kubernetes.io/projected/0055dc6b-eac6-40aa-adad-1a5202efabb7-kube-api-access-9d825\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.736828 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.742567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnz6g\" (UniqueName: \"kubernetes.io/projected/2c435a39-34e9-4d43-bff4-4f5d5a7f1275-kube-api-access-xnz6g\") pod \"octavia-operator-controller-manager-998648c74-b45jg\" (UID: \"2c435a39-34e9-4d43-bff4-4f5d5a7f1275\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.749384 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l54rz\" (UniqueName: \"kubernetes.io/projected/a72ff6fc-2086-4e96-9bc7-7298a0304e5e-kube-api-access-l54rz\") pod \"nova-operator-controller-manager-697bc559fc-jdgsz\" (UID: \"a72ff6fc-2086-4e96-9bc7-7298a0304e5e\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.760686 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.767141 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.781421 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxnhw\" (UniqueName: \"kubernetes.io/projected/424f7266-0185-4f27-9de3-1daf6a06dd2c-kube-api-access-bxnhw\") pod \"ovn-operator-controller-manager-b6456fdb6-ggt2m\" (UID: \"424f7266-0185-4f27-9de3-1daf6a06dd2c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.791778 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d6w7\" (UniqueName: \"kubernetes.io/projected/d50c6d95-dbef-423c-8094-f8a1634d9b72-kube-api-access-4d6w7\") pod \"placement-operator-controller-manager-78f8948974-9mbh5\" (UID: \"d50c6d95-dbef-423c-8094-f8a1634d9b72\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.791868 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndmb\" (UniqueName: \"kubernetes.io/projected/b7fb4033-737a-4492-a5fd-422532e0c693-kube-api-access-lndmb\") pod \"swift-operator-controller-manager-5f8c65bbfc-j45x4\" (UID: \"b7fb4033-737a-4492-a5fd-422532e0c693\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.791970 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9lp\" (UniqueName: \"kubernetes.io/projected/18059fdc-d882-485f-9de3-0567bac485ba-kube-api-access-bl9lp\") pod \"telemetry-operator-controller-manager-76cc84c6bb-nb62h\" (UID: \"18059fdc-d882-485f-9de3-0567bac485ba\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.795392 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45xf\" (UniqueName: \"kubernetes.io/projected/25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4-kube-api-access-d45xf\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ncd4b\" (UID: \"25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.799077 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.827575 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr"] Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.838938 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.952828 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d6w7\" (UniqueName: \"kubernetes.io/projected/d50c6d95-dbef-423c-8094-f8a1634d9b72-kube-api-access-4d6w7\") pod \"placement-operator-controller-manager-78f8948974-9mbh5\" (UID: \"d50c6d95-dbef-423c-8094-f8a1634d9b72\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.958402 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lndmb\" (UniqueName: \"kubernetes.io/projected/b7fb4033-737a-4492-a5fd-422532e0c693-kube-api-access-lndmb\") pod \"swift-operator-controller-manager-5f8c65bbfc-j45x4\" (UID: \"b7fb4033-737a-4492-a5fd-422532e0c693\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:41:20 crc kubenswrapper[4823]: I1206 06:41:20.958897 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9lp\" (UniqueName: \"kubernetes.io/projected/18059fdc-d882-485f-9de3-0567bac485ba-kube-api-access-bl9lp\") pod \"telemetry-operator-controller-manager-76cc84c6bb-nb62h\" (UID: \"18059fdc-d882-485f-9de3-0567bac485ba\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.066938 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.067561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lndmb\" (UniqueName: \"kubernetes.io/projected/b7fb4033-737a-4492-a5fd-422532e0c693-kube-api-access-lndmb\") pod \"swift-operator-controller-manager-5f8c65bbfc-j45x4\" (UID: \"b7fb4033-737a-4492-a5fd-422532e0c693\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.073553 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9lp\" (UniqueName: \"kubernetes.io/projected/18059fdc-d882-485f-9de3-0567bac485ba-kube-api-access-bl9lp\") pod \"telemetry-operator-controller-manager-76cc84c6bb-nb62h\" (UID: \"18059fdc-d882-485f-9de3-0567bac485ba\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.074853 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.076685 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.081125 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.089473 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ph6sm" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.090500 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7qsws" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.147797 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.581004 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.581200 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.581249 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert podName:0055dc6b-eac6-40aa-adad-1a5202efabb7 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:22.581232938 +0000 UTC m=+983.866984898 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" (UID: "0055dc6b-eac6-40aa-adad-1a5202efabb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.583201 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.584008 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.653037 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.653523 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.688385 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.689720 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.694182 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.697786 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.697862 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.697902 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmfn\" (UniqueName: \"kubernetes.io/projected/d98bfe02-e1d8-4bdf-a2e2-cf9a83964511-kube-api-access-mmmfn\") pod \"watcher-operator-controller-manager-6dd68fb56b-hkzrc\" (UID: \"d98bfe02-e1d8-4bdf-a2e2-cf9a83964511\") " pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.697922 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnhx\" (UniqueName: \"kubernetes.io/projected/433b05ca-a4e2-4e7f-96d2-53e6efb9efc7-kube-api-access-xpnhx\") pod \"test-operator-controller-manager-5854674fcc-vmvpr\" (UID: \"433b05ca-a4e2-4e7f-96d2-53e6efb9efc7\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.697959 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.697981 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdccn\" (UniqueName: \"kubernetes.io/projected/78374f83-e964-486e-9590-b6bb562a5185-kube-api-access-fdccn\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.698187 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.698231 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert podName:4001a5be-6496-49c2-971c-50723e76c864 nodeName:}" 
failed. No retries permitted until 2025-12-06 06:41:23.69821513 +0000 UTC m=+984.983967090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert") pod "infra-operator-controller-manager-57548d458d-7lkmh" (UID: "4001a5be-6496-49c2-971c-50723e76c864") : secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.699569 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-szgl9" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.701450 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.703197 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.756702 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.757755 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.757830 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.761535 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-fk9qt" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.799471 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdccn\" (UniqueName: \"kubernetes.io/projected/78374f83-e964-486e-9590-b6bb562a5185-kube-api-access-fdccn\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.806638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.806863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.806943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmfn\" (UniqueName: \"kubernetes.io/projected/d98bfe02-e1d8-4bdf-a2e2-cf9a83964511-kube-api-access-mmmfn\") pod \"watcher-operator-controller-manager-6dd68fb56b-hkzrc\" (UID: \"d98bfe02-e1d8-4bdf-a2e2-cf9a83964511\") " 
pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.806982 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnhx\" (UniqueName: \"kubernetes.io/projected/433b05ca-a4e2-4e7f-96d2-53e6efb9efc7-kube-api-access-xpnhx\") pod \"test-operator-controller-manager-5854674fcc-vmvpr\" (UID: \"433b05ca-a4e2-4e7f-96d2-53e6efb9efc7\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.808547 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.808628 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:22.30860605 +0000 UTC m=+983.594358010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "webhook-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.808650 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: E1206 06:41:21.808722 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:22.308705733 +0000 UTC m=+983.594457683 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "metrics-server-cert" not found Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.890123 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.903205 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpnhx\" (UniqueName: \"kubernetes.io/projected/433b05ca-a4e2-4e7f-96d2-53e6efb9efc7-kube-api-access-xpnhx\") pod \"test-operator-controller-manager-5854674fcc-vmvpr\" (UID: \"433b05ca-a4e2-4e7f-96d2-53e6efb9efc7\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.908536 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppc8q\" (UniqueName: \"kubernetes.io/projected/86d88b9b-a5a9-47e0-bfdb-381ef80693f3-kube-api-access-ppc8q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z92rh\" (UID: \"86d88b9b-a5a9-47e0-bfdb-381ef80693f3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.908940 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdccn\" (UniqueName: \"kubernetes.io/projected/78374f83-e964-486e-9590-b6bb562a5185-kube-api-access-fdccn\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.911202 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmfn\" (UniqueName: \"kubernetes.io/projected/d98bfe02-e1d8-4bdf-a2e2-cf9a83964511-kube-api-access-mmmfn\") pod \"watcher-operator-controller-manager-6dd68fb56b-hkzrc\" (UID: \"d98bfe02-e1d8-4bdf-a2e2-cf9a83964511\") " pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.936632 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk"] Dec 06 06:41:21 crc kubenswrapper[4823]: I1206 06:41:21.981471 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.003096 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.012559 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppc8q\" (UniqueName: \"kubernetes.io/projected/86d88b9b-a5a9-47e0-bfdb-381ef80693f3-kube-api-access-ppc8q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z92rh\" (UID: \"86d88b9b-a5a9-47e0-bfdb-381ef80693f3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.177043 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppc8q\" (UniqueName: \"kubernetes.io/projected/86d88b9b-a5a9-47e0-bfdb-381ef80693f3-kube-api-access-ppc8q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z92rh\" (UID: \"86d88b9b-a5a9-47e0-bfdb-381ef80693f3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.242701 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.386599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.386765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:22 crc kubenswrapper[4823]: E1206 06:41:22.387000 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 06:41:22 crc kubenswrapper[4823]: E1206 06:41:22.387074 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:23.38705456 +0000 UTC m=+984.672806520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "webhook-server-cert" not found Dec 06 06:41:22 crc kubenswrapper[4823]: E1206 06:41:22.387694 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 06:41:22 crc kubenswrapper[4823]: E1206 06:41:22.387999 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:23.387966347 +0000 UTC m=+984.673718307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "metrics-server-cert" not found Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.724842 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:22 crc kubenswrapper[4823]: E1206 06:41:22.725229 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:22 crc kubenswrapper[4823]: E1206 06:41:22.725326 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert podName:0055dc6b-eac6-40aa-adad-1a5202efabb7 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:24.725290296 +0000 UTC m=+986.011042256 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" (UID: "0055dc6b-eac6-40aa-adad-1a5202efabb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.900953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" event={"ID":"9bc807b4-b176-4249-9610-b4c92f99fb0b","Type":"ContainerStarted","Data":"6210e13e5f33cc6c527221c2935ea8f9feae9b0175e4d16c1866a0f4fef7763d"} Dec 06 06:41:22 crc kubenswrapper[4823]: I1206 06:41:22.952526 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" event={"ID":"3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf","Type":"ContainerStarted","Data":"88da90108222d1e8dafc7c914071926b88bae38a2ebffa7c50db3ddcc236eb2e"} Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.287217 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4"] Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.424063 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc"] Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.456522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.456644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: 
\"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:23 crc kubenswrapper[4823]: E1206 06:41:23.456807 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 06:41:23 crc kubenswrapper[4823]: E1206 06:41:23.456885 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:25.456858135 +0000 UTC m=+986.742610105 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "metrics-server-cert" not found Dec 06 06:41:23 crc kubenswrapper[4823]: E1206 06:41:23.456808 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 06:41:23 crc kubenswrapper[4823]: E1206 06:41:23.457190 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:25.457153114 +0000 UTC m=+986.742905124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "webhook-server-cert" not found Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.460436 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg"] Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.783450 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:23 crc kubenswrapper[4823]: E1206 06:41:23.785173 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:23 crc kubenswrapper[4823]: E1206 06:41:23.785277 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert podName:4001a5be-6496-49c2-971c-50723e76c864 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:27.785253446 +0000 UTC m=+989.071005406 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert") pod "infra-operator-controller-manager-57548d458d-7lkmh" (UID: "4001a5be-6496-49c2-971c-50723e76c864") : secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:23 crc kubenswrapper[4823]: I1206 06:41:23.983009 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.008610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" event={"ID":"25f101a2-6154-43f7-b4ef-2679a4ebacc9","Type":"ContainerStarted","Data":"04d1a4d6deaa74d825a535c2a684b6883fffae9b2e87a1af3b25c58d66f2b483"} Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.010122 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" event={"ID":"69d7c5b3-6bb3-4545-bcf3-9613f979646d","Type":"ContainerStarted","Data":"b3342dcae6f880930b2c6012a26da47a189cc3bf40e4096902e5ee92007ae11b"} Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.013097 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" event={"ID":"af7acc94-0229-4055-b0ea-e5646c927e7a","Type":"ContainerStarted","Data":"a12841ba58d1514e2d778ce8565782321988c095d625931184f7520ebe8d0c5e"} Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.030280 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.107834 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c2c4cb_ba18_4f49_9986_9095779c93dc.slice/crio-8a0c6dd3dee0c94f70fee0d7f88efed64b05aa84b55b9de1b6d08f500aaeafa2 WatchSource:0}: Error finding container 8a0c6dd3dee0c94f70fee0d7f88efed64b05aa84b55b9de1b6d08f500aaeafa2: Status 404 returned error can't find the container with id 8a0c6dd3dee0c94f70fee0d7f88efed64b05aa84b55b9de1b6d08f500aaeafa2 Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.167515 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-b45jg"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.451212 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.522462 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.552117 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.554676 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03d20c66_aa09_43f5_848a_b352868fb3de.slice/crio-c5147d4dd6bd8920d4db036e602b593f727ce9a418eee07afa29391e0fa1fe03 WatchSource:0}: Error finding container c5147d4dd6bd8920d4db036e602b593f727ce9a418eee07afa29391e0fa1fe03: Status 404 returned error can't find the container with id 
c5147d4dd6bd8920d4db036e602b593f727ce9a418eee07afa29391e0fa1fe03 Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.574016 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.586100 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.598255 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb125116_0c3b_4831_a05c_9076f5360e28.slice/crio-aa11c33bec6ce3c3906d97219fe4c382badb629d53d7247d56f657ee04586047 WatchSource:0}: Error finding container aa11c33bec6ce3c3906d97219fe4c382badb629d53d7247d56f657ee04586047: Status 404 returned error can't find the container with id aa11c33bec6ce3c3906d97219fe4c382badb629d53d7247d56f657ee04586047 Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.627396 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25eb7fcd_3634_4e2d_b2b3_2f15f9b0bfb4.slice/crio-aa3cb5ad4d53e9b045c589cbad0d66a0ff1830405341ccc1f9259be2f143e69e WatchSource:0}: Error finding container aa3cb5ad4d53e9b045c589cbad0d66a0ff1830405341ccc1f9259be2f143e69e: Status 404 returned error can't find the container with id aa3cb5ad4d53e9b045c589cbad0d66a0ff1830405341ccc1f9259be2f143e69e Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.648430 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xczmc"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.656177 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.667068 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.679798 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.680469 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5f7006_4db4_4e09_83e0_f7a1ef5d3f2e.slice/crio-45a1dda0af9f834a50306612f4c55a059932ec9ad8bac0970eeb4ea91d92a992 WatchSource:0}: Error finding container 45a1dda0af9f834a50306612f4c55a059932ec9ad8bac0970eeb4ea91d92a992: Status 404 returned error can't find the container with id 45a1dda0af9f834a50306612f4c55a059932ec9ad8bac0970eeb4ea91d92a992 Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.686781 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.695951 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d88b9b_a5a9_47e0_bfdb_381ef80693f3.slice/crio-b63097e59dab20d51085c6f6e3658462d3689c78a9b6ce124d5b95e584a6c5ec WatchSource:0}: Error finding container b63097e59dab20d51085c6f6e3658462d3689c78a9b6ce124d5b95e584a6c5ec: Status 404 returned error can't find the container with id 
b63097e59dab20d51085c6f6e3658462d3689c78a9b6ce124d5b95e584a6c5ec Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.702303 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.768889 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd98bfe02_e1d8_4bdf_a2e2_cf9a83964511.slice/crio-8fe789150729289d2495cc30b96a5e2549cee69de302270dbfc50c2d95a86106 WatchSource:0}: Error finding container 8fe789150729289d2495cc30b96a5e2549cee69de302270dbfc50c2d95a86106: Status 404 returned error can't find the container with id 8fe789150729289d2495cc30b96a5e2549cee69de302270dbfc50c2d95a86106 Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.820244 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:24 crc kubenswrapper[4823]: E1206 06:41:24.820552 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:24 crc kubenswrapper[4823]: E1206 06:41:24.820641 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert podName:0055dc6b-eac6-40aa-adad-1a5202efabb7 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:28.820597172 +0000 UTC m=+990.106349142 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" (UID: "0055dc6b-eac6-40aa-adad-1a5202efabb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.877603 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h"] Dec 06 06:41:24 crc kubenswrapper[4823]: I1206 06:41:24.885018 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr"] Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.931489 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod433b05ca_a4e2_4e7f_96d2_53e6efb9efc7.slice/crio-9676c7b74399c6437bce6f70344299952a40b8cbe1f43eaf207aeba3406835f2 WatchSource:0}: Error finding container 9676c7b74399c6437bce6f70344299952a40b8cbe1f43eaf207aeba3406835f2: Status 404 returned error can't find the container with id 9676c7b74399c6437bce6f70344299952a40b8cbe1f43eaf207aeba3406835f2 Dec 06 06:41:24 crc kubenswrapper[4823]: E1206 06:41:24.937259 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vmvpr_openstack-operators(433b05ca-a4e2-4e7f-96d2-53e6efb9efc7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 06:41:24 crc kubenswrapper[4823]: W1206 06:41:24.943564 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18059fdc_d882_485f_9de3_0567bac485ba.slice/crio-4f28a43a67fed3a0ef1a23ecccecfd94af74b985bf30331cbe312bc435e65382 WatchSource:0}: Error finding container 4f28a43a67fed3a0ef1a23ecccecfd94af74b985bf30331cbe312bc435e65382: Status 404 returned error can't find the container with id 4f28a43a67fed3a0ef1a23ecccecfd94af74b985bf30331cbe312bc435e65382 Dec 06 06:41:24 crc kubenswrapper[4823]: E1206 06:41:24.952653 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bl9lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-nb62h_openstack-operators(18059fdc-d882-485f-9de3-0567bac485ba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 06:41:24 crc kubenswrapper[4823]: E1206 06:41:24.955095 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bl9lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-nb62h_openstack-operators(18059fdc-d882-485f-9de3-0567bac485ba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 06:41:24 crc kubenswrapper[4823]: E1206 06:41:24.956274 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" podUID="18059fdc-d882-485f-9de3-0567bac485ba" Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.065181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" event={"ID":"22c2c4cb-ba18-4f49-9986-9095779c93dc","Type":"ContainerStarted","Data":"8a0c6dd3dee0c94f70fee0d7f88efed64b05aa84b55b9de1b6d08f500aaeafa2"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.069109 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" event={"ID":"147b67a9-b422-48ba-b948-a1b42946ef1d","Type":"ContainerStarted","Data":"33d8459d551060505e36c8d8048d6bfa526cadfcca5a26378a1a7bdcd6204aa7"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.076846 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" event={"ID":"b7fb4033-737a-4492-a5fd-422532e0c693","Type":"ContainerStarted","Data":"63ecae4a747f799a0b56192d0e7fa1bfd139374393af25356132f8cafdfc6d27"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.080799 4823 generic.go:334] "Generic (PLEG): container finished" podID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerID="4e156fe182bd230fc4881563a353ffdd4ccd6215b7d7be46bdc61a7fe232ec2d" exitCode=0 Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.080869 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xczmc" event={"ID":"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e","Type":"ContainerDied","Data":"4e156fe182bd230fc4881563a353ffdd4ccd6215b7d7be46bdc61a7fe232ec2d"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.080897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xczmc" event={"ID":"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e","Type":"ContainerStarted","Data":"45a1dda0af9f834a50306612f4c55a059932ec9ad8bac0970eeb4ea91d92a992"} Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.089301 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnvd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xczmc_openshift-marketplace(6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.090384 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openshift-marketplace/certified-operators-xczmc" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.091487 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" event={"ID":"d98bfe02-e1d8-4bdf-a2e2-cf9a83964511","Type":"ContainerStarted","Data":"8fe789150729289d2495cc30b96a5e2549cee69de302270dbfc50c2d95a86106"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.092603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" event={"ID":"86d88b9b-a5a9-47e0-bfdb-381ef80693f3","Type":"ContainerStarted","Data":"b63097e59dab20d51085c6f6e3658462d3689c78a9b6ce124d5b95e584a6c5ec"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.094224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" event={"ID":"e98ba71e-3a94-4c9e-b82a-e18dcb197cf9","Type":"ContainerStarted","Data":"6b161a3531883a985faaa2c46d71f98634c72c74382461622da986a818f57613"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.102034 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" event={"ID":"18059fdc-d882-485f-9de3-0567bac485ba","Type":"ContainerStarted","Data":"4f28a43a67fed3a0ef1a23ecccecfd94af74b985bf30331cbe312bc435e65382"} Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.106987 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" podUID="18059fdc-d882-485f-9de3-0567bac485ba" Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.107384 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" event={"ID":"424f7266-0185-4f27-9de3-1daf6a06dd2c","Type":"ContainerStarted","Data":"cd4459ff7dc1a9bc49e944d694ffd9de83ed840bf7a1d9c199168d044b97fb06"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.108344 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" event={"ID":"03d20c66-aa09-43f5-848a-b352868fb3de","Type":"ContainerStarted","Data":"c5147d4dd6bd8920d4db036e602b593f727ce9a418eee07afa29391e0fa1fe03"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.114302 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" event={"ID":"2c435a39-34e9-4d43-bff4-4f5d5a7f1275","Type":"ContainerStarted","Data":"a16d4c60dbcb41951b51645db2ed834175a9e856939a960fc1d2a6b11aa91755"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.116954 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" 
event={"ID":"cb125116-0c3b-4831-a05c-9076f5360e28","Type":"ContainerStarted","Data":"aa11c33bec6ce3c3906d97219fe4c382badb629d53d7247d56f657ee04586047"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.118127 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" event={"ID":"a72ff6fc-2086-4e96-9bc7-7298a0304e5e","Type":"ContainerStarted","Data":"12dd43bca5c099e3c4a24a98d766799f6ae89127e54928fb735f277e17548038"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.120550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" event={"ID":"433b05ca-a4e2-4e7f-96d2-53e6efb9efc7","Type":"ContainerStarted","Data":"9676c7b74399c6437bce6f70344299952a40b8cbe1f43eaf207aeba3406835f2"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.121793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" event={"ID":"d50c6d95-dbef-423c-8094-f8a1634d9b72","Type":"ContainerStarted","Data":"971b9dc8c5b93b0d5fc9fbeadcec3d541a195139f021de517f35c4489886770c"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.124324 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" event={"ID":"25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4","Type":"ContainerStarted","Data":"aa3cb5ad4d53e9b045c589cbad0d66a0ff1830405341ccc1f9259be2f143e69e"} Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.536317 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:25 crc kubenswrapper[4823]: I1206 06:41:25.537159 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.537362 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.537436 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:29.537416923 +0000 UTC m=+990.823168883 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "webhook-server-cert" not found Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.537840 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 06:41:25 crc kubenswrapper[4823]: E1206 06:41:25.537923 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:29.537906197 +0000 UTC m=+990.823658157 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "metrics-server-cert" not found Dec 06 06:41:26 crc kubenswrapper[4823]: E1206 06:41:26.201854 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xczmc" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" Dec 06 06:41:26 crc kubenswrapper[4823]: E1206 06:41:26.206728 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" podUID="18059fdc-d882-485f-9de3-0567bac485ba" Dec 06 06:41:27 crc kubenswrapper[4823]: I1206 06:41:27.789421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:27 crc kubenswrapper[4823]: E1206 06:41:27.789704 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:27 crc kubenswrapper[4823]: E1206 06:41:27.789984 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert podName:4001a5be-6496-49c2-971c-50723e76c864 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:35.789961268 +0000 UTC m=+997.075713228 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert") pod "infra-operator-controller-manager-57548d458d-7lkmh" (UID: "4001a5be-6496-49c2-971c-50723e76c864") : secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:28 crc kubenswrapper[4823]: I1206 06:41:28.840254 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:28 crc kubenswrapper[4823]: E1206 06:41:28.840496 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:28 crc kubenswrapper[4823]: E1206 06:41:28.840547 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert podName:0055dc6b-eac6-40aa-adad-1a5202efabb7 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:36.840530225 +0000 UTC m=+998.126282185 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" (UID: "0055dc6b-eac6-40aa-adad-1a5202efabb7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 06:41:29 crc kubenswrapper[4823]: I1206 06:41:29.630193 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:29 crc kubenswrapper[4823]: E1206 06:41:29.630373 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 06:41:29 crc kubenswrapper[4823]: E1206 06:41:29.630690 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:37.630643092 +0000 UTC m=+998.916395052 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "webhook-server-cert" not found Dec 06 06:41:29 crc kubenswrapper[4823]: E1206 06:41:29.630748 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 06:41:29 crc kubenswrapper[4823]: E1206 06:41:29.630790 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:37.630777746 +0000 UTC m=+998.916529706 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "metrics-server-cert" not found Dec 06 06:41:29 crc kubenswrapper[4823]: I1206 06:41:29.630642 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:35 crc kubenswrapper[4823]: I1206 06:41:35.829152 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:35 crc kubenswrapper[4823]: E1206 06:41:35.829395 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:35 crc kubenswrapper[4823]: E1206 06:41:35.830307 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert podName:4001a5be-6496-49c2-971c-50723e76c864 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:51.830283588 +0000 UTC m=+1013.116035618 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert") pod "infra-operator-controller-manager-57548d458d-7lkmh" (UID: "4001a5be-6496-49c2-971c-50723e76c864") : secret "infra-operator-webhook-server-cert" not found Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.051595 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.051677 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.051742 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.052369 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a1cf76af8a6f384ac47680b767c5129bfc1481da61050b03811147d1a619220"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.052424 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://2a1cf76af8a6f384ac47680b767c5129bfc1481da61050b03811147d1a619220" gracePeriod=600 Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.575269 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="2a1cf76af8a6f384ac47680b767c5129bfc1481da61050b03811147d1a619220" exitCode=0 Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.575318 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"2a1cf76af8a6f384ac47680b767c5129bfc1481da61050b03811147d1a619220"} Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.575623 4823 scope.go:117] "RemoveContainer" containerID="5eadb100f9de392020e8ad9c0c80f79bb4ee89b08a0b99aaf32660b2052224b2" Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.889522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:36 crc kubenswrapper[4823]: I1206 06:41:36.896222 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0055dc6b-eac6-40aa-adad-1a5202efabb7-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb\" (UID: \"0055dc6b-eac6-40aa-adad-1a5202efabb7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:37 crc kubenswrapper[4823]: I1206 06:41:37.044776 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" Dec 06 06:41:37 crc kubenswrapper[4823]: I1206 06:41:37.718620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:37 crc kubenswrapper[4823]: E1206 06:41:37.718821 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 06:41:37 crc kubenswrapper[4823]: E1206 06:41:37.719095 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:53.719071687 +0000 UTC m=+1015.004823647 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "metrics-server-cert" not found Dec 06 06:41:37 crc kubenswrapper[4823]: I1206 06:41:37.719570 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:37 crc kubenswrapper[4823]: E1206 06:41:37.719738 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 06:41:37 crc kubenswrapper[4823]: E1206 06:41:37.719786 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs podName:78374f83-e964-486e-9590-b6bb562a5185 nodeName:}" failed. No retries permitted until 2025-12-06 06:41:53.719775097 +0000 UTC m=+1015.005527057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs") pod "openstack-operator-controller-manager-75cbb7bbf4-bcdjh" (UID: "78374f83-e964-486e-9590-b6bb562a5185") : secret "webhook-server-cert" not found Dec 06 06:41:43 crc kubenswrapper[4823]: E1206 06:41:43.722368 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 06 06:41:43 crc kubenswrapper[4823]: E1206 06:41:43.723053 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnz6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-b45jg_openstack-operators(2c435a39-34e9-4d43-bff4-4f5d5a7f1275): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:41:51 crc kubenswrapper[4823]: I1206 06:41:51.882524 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:51 crc kubenswrapper[4823]: I1206 06:41:51.898795 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4001a5be-6496-49c2-971c-50723e76c864-cert\") pod \"infra-operator-controller-manager-57548d458d-7lkmh\" (UID: \"4001a5be-6496-49c2-971c-50723e76c864\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:52 crc kubenswrapper[4823]: I1206 06:41:52.153014 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.541452 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nvcjb"] Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.543849 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.555512 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nvcjb"] Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.605813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-catalog-content\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.605902 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-utilities\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.606009 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmqk\" (UniqueName: \"kubernetes.io/projected/c87fe9ea-0538-45b9-bc58-630991438ad5-kube-api-access-fwmqk\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.707027 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-catalog-content\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.707063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-utilities\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.707125 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmqk\" (UniqueName: \"kubernetes.io/projected/c87fe9ea-0538-45b9-bc58-630991438ad5-kube-api-access-fwmqk\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.707613 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-catalog-content\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.707645 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-utilities\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.732602 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fwmqk\" (UniqueName: \"kubernetes.io/projected/c87fe9ea-0538-45b9-bc58-630991438ad5-kube-api-access-fwmqk\") pod \"community-operators-nvcjb\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.808331 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.808453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.824582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-webhook-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.824581 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78374f83-e964-486e-9590-b6bb562a5185-metrics-certs\") pod \"openstack-operator-controller-manager-75cbb7bbf4-bcdjh\" (UID: \"78374f83-e964-486e-9590-b6bb562a5185\") " pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.838920 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:41:53 crc kubenswrapper[4823]: I1206 06:41:53.866479 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:41:54 crc kubenswrapper[4823]: E1206 06:41:54.166819 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 06 06:41:54 crc kubenswrapper[4823]: E1206 06:41:54.167127 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-478v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-mwbpk_openstack-operators(9bc807b4-b176-4249-9610-b4c92f99fb0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:41:58 crc kubenswrapper[4823]: E1206 06:41:58.548280 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 06 06:41:58 crc kubenswrapper[4823]: E1206 06:41:58.549071 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xss97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-nggsj_openstack-operators(22c2c4cb-ba18-4f49-9986-9095779c93dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:41:59 crc kubenswrapper[4823]: E1206 06:41:59.284839 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 06 06:41:59 crc kubenswrapper[4823]: E1206 06:41:59.285348 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d6w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-9mbh5_openstack-operators(d50c6d95-dbef-423c-8094-f8a1634d9b72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:02 crc kubenswrapper[4823]: E1206 06:42:02.638246 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 06 06:42:02 crc kubenswrapper[4823]: E1206 06:42:02.639992 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d45xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-ncd4b_openstack-operators(25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:03 crc kubenswrapper[4823]: E1206 06:42:03.366831 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 06 06:42:03 crc kubenswrapper[4823]: E1206 06:42:03.367765 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xbthq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-d4pqr_openstack-operators(e98ba71e-3a94-4c9e-b82a-e18dcb197cf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:05 crc kubenswrapper[4823]: E1206 06:42:05.890218 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 06 06:42:05 crc kubenswrapper[4823]: E1206 06:42:05.890938 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gdmhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-n56m7_openstack-operators(147b67a9-b422-48ba-b948-a1b42946ef1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:06 crc kubenswrapper[4823]: E1206 06:42:06.921243 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 06 06:42:06 crc kubenswrapper[4823]: E1206 06:42:06.921690 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lndmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-j45x4_openstack-operators(b7fb4033-737a-4492-a5fd-422532e0c693): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:07 crc kubenswrapper[4823]: E1206 06:42:07.775429 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 06 06:42:07 crc kubenswrapper[4823]: E1206 06:42:07.775739 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7prk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-m9pvc_openstack-operators(03d20c66-aa09-43f5-848a-b352868fb3de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:08 crc kubenswrapper[4823]: E1206 06:42:08.956123 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 06 06:42:08 crc kubenswrapper[4823]: E1206 06:42:08.956374 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxnhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-ggt2m_openstack-operators(424f7266-0185-4f27-9de3-1daf6a06dd2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.032222 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/openstack-k8s-operators/watcher-operator:95c173aa1001d4ab3cfe7b9c2a43dd32b29c60cb" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.032279 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/openstack-k8s-operators/watcher-operator:95c173aa1001d4ab3cfe7b9c2a43dd32b29c60cb" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.032415 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.174:5001/openstack-k8s-operators/watcher-operator:95c173aa1001d4ab3cfe7b9c2a43dd32b29c60cb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmmfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd68fb56b-hkzrc_openstack-operators(d98bfe02-e1d8-4bdf-a2e2-cf9a83964511): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.549471 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.550003 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppc8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-z92rh_openstack-operators(86d88b9b-a5a9-47e0-bfdb-381ef80693f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.551208 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" podUID="86d88b9b-a5a9-47e0-bfdb-381ef80693f3" Dec 06 06:42:09 crc kubenswrapper[4823]: E1206 06:42:09.816105 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" podUID="86d88b9b-a5a9-47e0-bfdb-381ef80693f3" Dec 06 06:42:10 crc kubenswrapper[4823]: E1206 06:42:10.210434 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 06 06:42:10 crc kubenswrapper[4823]: E1206 06:42:10.210710 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-z7czp_openstack-operators(cb125116-0c3b-4831-a05c-9076f5360e28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:10 crc kubenswrapper[4823]: E1206 06:42:10.855101 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 06 06:42:10 crc kubenswrapper[4823]: E1206 06:42:10.855304 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bl9lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-nb62h_openstack-operators(18059fdc-d882-485f-9de3-0567bac485ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:12 crc kubenswrapper[4823]: E1206 06:42:12.530900 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 06 06:42:12 crc kubenswrapper[4823]: E1206 06:42:12.531422 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l54rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-jdgsz_openstack-operators(a72ff6fc-2086-4e96-9bc7-7298a0304e5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:12 crc kubenswrapper[4823]: E1206 06:42:12.906831 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 06:42:12 crc kubenswrapper[4823]: E1206 06:42:12.906974 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnvd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xczmc_openshift-marketplace(6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 06:42:12 crc kubenswrapper[4823]: E1206 06:42:12.908549 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
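The pull failures in this stretch all end the same way: the CRI copy is cancelled ("copying config: context canceled") and the kubelet moves the container into back-off, as in the rabbitmq-cluster-operator entries above, where ErrImagePull at 06:42:09.551 becomes ImagePullBackOff by 06:42:09.816. Both the image-pull back-off and the earlier 16s volume retry follow a doubling-with-cap schedule; below is a minimal sketch of that pattern, with a base and cap chosen for illustration rather than read from this excerpt.

```go
// Editor's sketch (not part of the log): the doubling-with-cap retry
// schedule that the ErrImagePull -> ImagePullBackOff transitions and the
// "durationBeforeRetry 16s" volume retry above suggest. Base and cap are
// illustrative assumptions, not values taken from this excerpt.
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay and clamps it at maxDelay,
// starting from base on the first retry.
func nextDelay(prev, base, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return base
	}
	d := 2 * prev
	if d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	var d time.Duration
	for i := 1; i <= 7; i++ {
		d = nextDelay(d, 10*time.Second, 5*time.Minute) // assumed base/cap
		fmt.Printf("retry %d after %v\n", i, d)
	}
}
```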
pod="openshift-marketplace/certified-operators-xczmc" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" Dec 06 06:42:13 crc kubenswrapper[4823]: I1206 06:42:13.224969 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb"] Dec 06 06:42:13 crc kubenswrapper[4823]: I1206 06:42:13.372431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nvcjb"] Dec 06 06:42:13 crc kubenswrapper[4823]: W1206 06:42:13.479455 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0055dc6b_eac6_40aa_adad_1a5202efabb7.slice/crio-5f01c4d7430cacf7a378685513b2e4b4a41a3d10918ad2113e9f1d86556c0e3a WatchSource:0}: Error finding container 5f01c4d7430cacf7a378685513b2e4b4a41a3d10918ad2113e9f1d86556c0e3a: Status 404 returned error can't find the container with id 5f01c4d7430cacf7a378685513b2e4b4a41a3d10918ad2113e9f1d86556c0e3a Dec 06 06:42:13 crc kubenswrapper[4823]: W1206 06:42:13.481056 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87fe9ea_0538_45b9_bc58_630991438ad5.slice/crio-c5adf6a41276699f5d6190035b983f1b1c8d508e891f2be5614f018ad2612f50 WatchSource:0}: Error finding container c5adf6a41276699f5d6190035b983f1b1c8d508e891f2be5614f018ad2612f50: Status 404 returned error can't find the container with id c5adf6a41276699f5d6190035b983f1b1c8d508e891f2be5614f018ad2612f50 Dec 06 06:42:13 crc kubenswrapper[4823]: I1206 06:42:13.738494 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh"] Dec 06 06:42:13 crc kubenswrapper[4823]: I1206 06:42:13.843331 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerStarted","Data":"c5adf6a41276699f5d6190035b983f1b1c8d508e891f2be5614f018ad2612f50"} Dec 06 06:42:13 crc kubenswrapper[4823]: I1206 06:42:13.846448 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"3a9115986422c421655f98d90d9af3c203435cfaca9c79b7b491e0d1286a3843"} Dec 06 06:42:13 crc kubenswrapper[4823]: I1206 06:42:13.847469 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" event={"ID":"0055dc6b-eac6-40aa-adad-1a5202efabb7","Type":"ContainerStarted","Data":"5f01c4d7430cacf7a378685513b2e4b4a41a3d10918ad2113e9f1d86556c0e3a"} Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.020964 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.021137 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-478v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-mwbpk_openstack-operators(9bc807b4-b176-4249-9610-b4c92f99fb0b): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.022904 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" podUID="9bc807b4-b176-4249-9610-b4c92f99fb0b" Dec 06 06:42:14 crc kubenswrapper[4823]: W1206 06:42:14.614606 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4001a5be_6496_49c2_971c_50723e76c864.slice/crio-085d795c42de6a3a32b207651cd1c4bbe78ced21a66c4ed2e11094f0d8f16dd5 WatchSource:0}: Error finding container 085d795c42de6a3a32b207651cd1c4bbe78ced21a66c4ed2e11094f0d8f16dd5: Status 404 returned error can't find the container with id 085d795c42de6a3a32b207651cd1c4bbe78ced21a66c4ed2e11094f0d8f16dd5 Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.621607 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.621868 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vmvpr_openstack-operators(433b05ca-a4e2-4e7f-96d2-53e6efb9efc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.623398 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" podUID="433b05ca-a4e2-4e7f-96d2-53e6efb9efc7" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.658126 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.658326 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnz6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-b45jg_openstack-operators(2c435a39-34e9-4d43-bff4-4f5d5a7f1275): ErrImagePull: rpc error: code = Canceled desc = copying 
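The "PullImage from image service failed" errors above are gRPC errors surfaced from the CRI runtime: the pull context was torn down mid-copy, so the runtime answers with code Canceled. A minimal sketch of how such an error can be produced and classified, assuming a hypothetical pullImage helper (the real kubelet logic is more involved):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pullImage stands in for the CRI ImageService PullImage call; here it just
// honors context cancellation the way the log lines above show ("copying
// layer: context canceled").
func pullImage(ctx context.Context, image string) error {
	select {
	case <-time.After(10 * time.Second): // pretend the layer copy takes 10s
		return nil
	case <-ctx.Done():
		return status.Error(codes.Canceled, "copying layer: context canceled")
	}
}

// classifyPullError maps a CRI error onto the kubelet-style reason that later
// shows up in pod events (ErrImagePull), distinguishing cancellations.
func classifyPullError(err error) string {
	switch status.Code(err) {
	case codes.OK:
		return "Pulled"
	case codes.Canceled, codes.DeadlineExceeded:
		return "ErrImagePull (transient: pull context ended)"
	default:
		return "ErrImagePull"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	err := pullImage(ctx, "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0")
	fmt.Println(classifyPullError(err), err)
}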
layer: context canceled" logger="UnhandledError" Dec 06 06:42:14 crc kubenswrapper[4823]: E1206 06:42:14.659566 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" podUID="2c435a39-34e9-4d43-bff4-4f5d5a7f1275" Dec 06 06:42:14 crc kubenswrapper[4823]: I1206 06:42:14.858361 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" event={"ID":"4001a5be-6496-49c2-971c-50723e76c864","Type":"ContainerStarted","Data":"085d795c42de6a3a32b207651cd1c4bbe78ced21a66c4ed2e11094f0d8f16dd5"} Dec 06 06:42:14 crc kubenswrapper[4823]: I1206 06:42:14.963705 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh"] Dec 06 06:42:15 crc kubenswrapper[4823]: I1206 06:42:15.865694 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" event={"ID":"78374f83-e964-486e-9590-b6bb562a5185","Type":"ContainerStarted","Data":"9f7807ca98a59fc27eea4340bcdab93a4afd47ba4c8ebc6319d607214de33862"} Dec 06 06:42:16 crc kubenswrapper[4823]: I1206 06:42:16.874655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" event={"ID":"69d7c5b3-6bb3-4545-bcf3-9613f979646d","Type":"ContainerStarted","Data":"ad0865c39f65a4779eeeb656f3a2bf489380d0e0f708eb5a9340e5475b43c5a7"} Dec 06 06:42:17 crc kubenswrapper[4823]: E1206 06:42:17.089496 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" podUID="e98ba71e-3a94-4c9e-b82a-e18dcb197cf9" Dec 06 06:42:17 crc kubenswrapper[4823]: E1206 06:42:17.093730 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" podUID="03d20c66-aa09-43f5-848a-b352868fb3de" Dec 06 06:42:17 crc kubenswrapper[4823]: E1206 06:42:17.541295 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" podUID="424f7266-0185-4f27-9de3-1daf6a06dd2c" Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.887771 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" event={"ID":"25f101a2-6154-43f7-b4ef-2679a4ebacc9","Type":"ContainerStarted","Data":"9dc68815815d9b8dd2ceb498a0d6c080a0eef705e0106ea7f3da5772539e7e6d"} Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.889643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" 
event={"ID":"e98ba71e-3a94-4c9e-b82a-e18dcb197cf9","Type":"ContainerStarted","Data":"8cca99342b32b0a864466f94bcfd5f6706c0cab69249637786fc62c1576db2e9"} Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.892673 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" event={"ID":"424f7266-0185-4f27-9de3-1daf6a06dd2c","Type":"ContainerStarted","Data":"9565e51f55ec717bd140673a23a239787c7b28ccf2af16e6b9801fc819e98999"} Dec 06 06:42:17 crc kubenswrapper[4823]: E1206 06:42:17.893787 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" podUID="424f7266-0185-4f27-9de3-1daf6a06dd2c" Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.895160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" event={"ID":"af7acc94-0229-4055-b0ea-e5646c927e7a","Type":"ContainerStarted","Data":"047ce790e0e898a61d0d6e7034435d47aecf7539e6b015fe496f08a1c7435a7c"} Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.897390 4823 generic.go:334] "Generic (PLEG): container finished" podID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerID="6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2" exitCode=0 Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.897440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerDied","Data":"6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2"} Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.900005 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" event={"ID":"3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf","Type":"ContainerStarted","Data":"08014fb82a699eb036a26fde04b5b4e47736881f94ec8a7136285157d42067de"} Dec 06 06:42:17 crc kubenswrapper[4823]: I1206 06:42:17.901328 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" event={"ID":"03d20c66-aa09-43f5-848a-b352868fb3de","Type":"ContainerStarted","Data":"e7def4581be03e1923e9c7f7b8ba9eb4ddc35e1cc17599e5852e5051cc82b7ee"} Dec 06 06:42:19 crc kubenswrapper[4823]: E1206 06:42:18.940983 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" podUID="424f7266-0185-4f27-9de3-1daf6a06dd2c" Dec 06 06:42:22 crc kubenswrapper[4823]: E1206 06:42:22.320942 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" podUID="a72ff6fc-2086-4e96-9bc7-7298a0304e5e" Dec 06 06:42:23 crc kubenswrapper[4823]: I1206 06:42:23.054954 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" event={"ID":"a72ff6fc-2086-4e96-9bc7-7298a0304e5e","Type":"ContainerStarted","Data":"597ef9c16dea1475f0d141cd5a7de9d9d82564f592e2ca7ac58f0704d072f1e4"} Dec 06 06:42:26 crc kubenswrapper[4823]: I1206 06:42:26.113839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" event={"ID":"78374f83-e964-486e-9590-b6bb562a5185","Type":"ContainerStarted","Data":"1dbdd602570adf22e311f1b8a67eda497ff8454767dec4f937584d080fa3a786"} Dec 06 06:42:26 crc kubenswrapper[4823]: I1206 06:42:26.114532 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:42:26 crc kubenswrapper[4823]: I1206 06:42:26.141557 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" podStartSLOduration=66.141536829 podStartE2EDuration="1m6.141536829s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:42:26.140349994 +0000 UTC m=+1047.426101954" watchObservedRunningTime="2025-12-06 06:42:26.141536829 +0000 UTC m=+1047.427288789" Dec 06 06:42:26 crc kubenswrapper[4823]: E1206 06:42:26.167309 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xczmc" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" Dec 06 06:42:28 crc kubenswrapper[4823]: I1206 06:42:28.132714 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" event={"ID":"e98ba71e-3a94-4c9e-b82a-e18dcb197cf9","Type":"ContainerStarted","Data":"435e7fa39334e090ac2b63c6ad98b1e5ba0509a19c39346df7379779af90ab46"} Dec 06 06:42:28 crc kubenswrapper[4823]: I1206 06:42:28.136440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" event={"ID":"af7acc94-0229-4055-b0ea-e5646c927e7a","Type":"ContainerStarted","Data":"421d9f430c3370e4d57369786cc897c7a02a354f9280ab1ea0842747279acbf4"} Dec 06 06:42:28 crc kubenswrapper[4823]: I1206 06:42:28.137748 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" event={"ID":"d98bfe02-e1d8-4bdf-a2e2-cf9a83964511","Type":"ContainerStarted","Data":"705c878bc44b619430798a053ad9e923719b607fd22a6a7106498822bca54d32"} Dec 06 06:42:28 crc kubenswrapper[4823]: I1206 06:42:28.139778 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" event={"ID":"2c435a39-34e9-4d43-bff4-4f5d5a7f1275","Type":"ContainerStarted","Data":"89d8ff32ae966f7d81518b97cca3584ab7cab8f37ee6abb8f3bfc07904324500"} Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.510798 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" 
podUID="147b67a9-b422-48ba-b948-a1b42946ef1d" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.512386 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" podUID="d98bfe02-e1d8-4bdf-a2e2-cf9a83964511" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.910877 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" podUID="b7fb4033-737a-4492-a5fd-422532e0c693" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.910999 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" podUID="25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.913883 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" podUID="22c2c4cb-ba18-4f49-9986-9095779c93dc" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.920693 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" podUID="d50c6d95-dbef-423c-8094-f8a1634d9b72" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.921137 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" podUID="18059fdc-d882-485f-9de3-0567bac485ba" Dec 06 06:42:28 crc kubenswrapper[4823]: E1206 06:42:28.921253 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" podUID="cb125116-0c3b-4831-a05c-9076f5360e28" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.210602 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerStarted","Data":"3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.213472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" event={"ID":"a72ff6fc-2086-4e96-9bc7-7298a0304e5e","Type":"ContainerStarted","Data":"4fea7d775d6634c5336d06637f69733d0720b5a4130d24de981186e4486ad43f"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.213968 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.216650 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" event={"ID":"433b05ca-a4e2-4e7f-96d2-53e6efb9efc7","Type":"ContainerStarted","Data":"04d79f2ae11a0c8f88c63ca48fc42438402cd6f98f48f59cdf7ac3e766bb1bf9"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.217971 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" event={"ID":"147b67a9-b422-48ba-b948-a1b42946ef1d","Type":"ContainerStarted","Data":"cfeef1f8f57e23c3bc789d6483acd13de9f78f822a7721a5b3d14171e9d016f5"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.230190 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" event={"ID":"b7fb4033-737a-4492-a5fd-422532e0c693","Type":"ContainerStarted","Data":"0d89b27151163259a14c6789a97cd0a10984e9813813c9739d0a78777cd067b7"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.233674 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" event={"ID":"22c2c4cb-ba18-4f49-9986-9095779c93dc","Type":"ContainerStarted","Data":"df3da07bdaa5fe35bcc74de1d305b59e989f3d5576e653a76a1519ce27cf32f6"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.251101 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" event={"ID":"4001a5be-6496-49c2-971c-50723e76c864","Type":"ContainerStarted","Data":"6e2d6fdf7705f5e998a149d1d7715297fc35a005c7ab0a610669bdab1a08a4e9"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.325921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" event={"ID":"0055dc6b-eac6-40aa-adad-1a5202efabb7","Type":"ContainerStarted","Data":"5e3d9ec7138056f4f1449b044d5ac0115bcc0c8ebb4d603e0b493988ea330fb7"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.352144 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" event={"ID":"cb125116-0c3b-4831-a05c-9076f5360e28","Type":"ContainerStarted","Data":"78943e34bdb1e199fbe21734f3d913f4f48071081481053dc4e43b3b91211362"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.374973 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" event={"ID":"86d88b9b-a5a9-47e0-bfdb-381ef80693f3","Type":"ContainerStarted","Data":"4962c668eda0b998fcf966143e1c6939a132eb1feb8f6cf318a397c8fb733257"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.391503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" event={"ID":"18059fdc-d882-485f-9de3-0567bac485ba","Type":"ContainerStarted","Data":"f8b7d441ef8cc830a85f017f6dc2b748e8bf2f4ce527a09f14dc5d6cd87549be"} Dec 06 06:42:29 crc kubenswrapper[4823]: E1206 06:42:29.393008 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" podUID="18059fdc-d882-485f-9de3-0567bac485ba" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.482320 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" event={"ID":"3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf","Type":"ContainerStarted","Data":"4c58fef447cdcfa7818a2ab76a9d5c700f86791dd089adbe6340e48e13663469"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.486104 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.498150 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" event={"ID":"03d20c66-aa09-43f5-848a-b352868fb3de","Type":"ContainerStarted","Data":"3eb2405099d925fa927a001f56e938679bafc8a303eff9422eb27d95384078d9"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.499268 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.513105 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.518902 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" event={"ID":"2c435a39-34e9-4d43-bff4-4f5d5a7f1275","Type":"ContainerStarted","Data":"7560c9167c5cd8ca0a6bbe08d723621d9ba815fd000016c57485f292c8e30777"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.519952 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.549969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" event={"ID":"d50c6d95-dbef-423c-8094-f8a1634d9b72","Type":"ContainerStarted","Data":"4def515c629c4ed368892330e093dd0b79e5feee064058d0e5bec1d9a54b994c"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.709609 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" event={"ID":"25f101a2-6154-43f7-b4ef-2679a4ebacc9","Type":"ContainerStarted","Data":"54c48a8f81433c42f99b2966bb2b6fcfa627dac97fb88a44b2bae4aa47c7c67b"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.710805 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.721288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" event={"ID":"25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4","Type":"ContainerStarted","Data":"98cb61fdd57a96c2ec155a32a8d67b0257c090cd33f34224a3169e8c7aff5811"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.759359 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.759723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" event={"ID":"69d7c5b3-6bb3-4545-bcf3-9613f979646d","Type":"ContainerStarted","Data":"9ffe07ea81dfce0ec00b05ee9a809443e912b187221b6838e32bc37222559096"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.761197 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.771077 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.775045 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" event={"ID":"9bc807b4-b176-4249-9610-b4c92f99fb0b","Type":"ContainerStarted","Data":"ec6dbc5b128184b1086b9692182f8dae1ee2f882b7983d1bfb066c0350f3a21a"} Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.775094 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.776157 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.789145 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" Dec 06 06:42:29 crc kubenswrapper[4823]: I1206 06:42:29.867910 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" podStartSLOduration=8.164135337 podStartE2EDuration="1m9.867888401s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.107951742 +0000 UTC m=+985.393703702" lastFinishedPulling="2025-12-06 06:42:25.811704806 +0000 UTC m=+1047.097456766" observedRunningTime="2025-12-06 06:42:29.856154321 +0000 UTC m=+1051.141906301" watchObservedRunningTime="2025-12-06 06:42:29.867888401 +0000 UTC m=+1051.153640361" Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.135477 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4xsdc" podStartSLOduration=25.574760761 podStartE2EDuration="1m11.135458728s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:23.538524263 +0000 UTC m=+984.824276223" lastFinishedPulling="2025-12-06 06:42:09.09922224 +0000 UTC m=+1030.384974190" observedRunningTime="2025-12-06 06:42:30.134444469 +0000 UTC m=+1051.420196429" watchObservedRunningTime="2025-12-06 06:42:30.135458728 +0000 UTC m=+1051.421210688" Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.164947 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" podStartSLOduration=10.550256521 podStartE2EDuration="1m11.164931933s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.609979086 
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.198350 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4" podStartSLOduration=25.206221797 podStartE2EDuration="1m11.198323281s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:23.538136602 +0000 UTC m=+984.823888572" lastFinishedPulling="2025-12-06 06:42:09.530238086 +0000 UTC m=+1030.815990056" observedRunningTime="2025-12-06 06:42:30.193025837 +0000 UTC m=+1051.478777797" watchObservedRunningTime="2025-12-06 06:42:30.198323281 +0000 UTC m=+1051.484075241"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.684621 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" podStartSLOduration=9.64955421 podStartE2EDuration="1m10.684595989s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.190833274 +0000 UTC m=+985.476585234" lastFinishedPulling="2025-12-06 06:42:25.225875053 +0000 UTC m=+1046.511627013" observedRunningTime="2025-12-06 06:42:30.414402745 +0000 UTC m=+1051.700154705" watchObservedRunningTime="2025-12-06 06:42:30.684595989 +0000 UTC m=+1051.970347949"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.758586 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9r9sg" podStartSLOduration=25.806636663 podStartE2EDuration="1m11.758559363s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:23.578268725 +0000 UTC m=+984.864020685" lastFinishedPulling="2025-12-06 06:42:09.530191425 +0000 UTC m=+1030.815943385" observedRunningTime="2025-12-06 06:42:30.667252776 +0000 UTC m=+1051.953004756" watchObservedRunningTime="2025-12-06 06:42:30.758559363 +0000 UTC m=+1052.044311323"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.897281 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-5jsvb" podStartSLOduration=24.810794662 podStartE2EDuration="1m11.897254944s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:22.013990484 +0000 UTC m=+983.299742444" lastFinishedPulling="2025-12-06 06:42:09.100450766 +0000 UTC m=+1030.386202726" observedRunningTime="2025-12-06 06:42:30.829715586 +0000 UTC m=+1052.115467556" watchObservedRunningTime="2025-12-06 06:42:30.897254944 +0000 UTC m=+1052.183006924"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.905762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" event={"ID":"4001a5be-6496-49c2-971c-50723e76c864","Type":"ContainerStarted","Data":"df721a024b2ab8685c6583ae769e6ea83c44d59f0c92b4978eb3ca237c1b05b2"}
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.906008 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.906316 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z92rh" podStartSLOduration=9.799350193 podStartE2EDuration="1m10.906293866s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.700423708 +0000 UTC m=+985.986175668" lastFinishedPulling="2025-12-06 06:42:25.807367381 +0000 UTC m=+1047.093119341" observedRunningTime="2025-12-06 06:42:30.893577627 +0000 UTC m=+1052.179329607" watchObservedRunningTime="2025-12-06 06:42:30.906293866 +0000 UTC m=+1052.192045836"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.909425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" event={"ID":"0055dc6b-eac6-40aa-adad-1a5202efabb7","Type":"ContainerStarted","Data":"421a3918c321dcfd661623342497c950fc01f98a50b3f85e626c7f1c211bd8f6"}
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.910046 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.916549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" event={"ID":"9bc807b4-b176-4249-9610-b4c92f99fb0b","Type":"ContainerStarted","Data":"db31aae6ba158326527885e8bab75fd802a9138ebfb9678c8cf4de284c345f62"}
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.917246 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk"
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.923490 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" event={"ID":"433b05ca-a4e2-4e7f-96d2-53e6efb9efc7","Type":"ContainerStarted","Data":"dfa0776191cf58efdbccf3d0062ee4b01eebece5a5f40f05ac4733e392a1a819"}
Dec 06 06:42:30 crc kubenswrapper[4823]: I1206 06:42:30.972232 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" podStartSLOduration=9.758317114 podStartE2EDuration="1m10.972211717s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.597945167 +0000 UTC m=+985.883697127" lastFinishedPulling="2025-12-06 06:42:25.81183975 +0000 UTC m=+1047.097591730" observedRunningTime="2025-12-06 06:42:30.931922789 +0000 UTC m=+1052.217674759" watchObservedRunningTime="2025-12-06 06:42:30.972211717 +0000 UTC m=+1052.257963677"
Dec 06 06:42:31 crc kubenswrapper[4823]: I1206 06:42:31.034238 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb" podStartSLOduration=59.290580851 podStartE2EDuration="1m11.034195674s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:42:13.483004252 +0000 UTC m=+1034.768756222" lastFinishedPulling="2025-12-06 06:42:25.226619085 +0000 UTC m=+1046.512371045" observedRunningTime="2025-12-06 06:42:31.027999244 +0000 UTC m=+1052.313751204" watchObservedRunningTime="2025-12-06 06:42:31.034195674 +0000 UTC m=+1052.319947634"
Dec 06 06:42:31 crc kubenswrapper[4823]: I1206 06:42:31.045762 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" podStartSLOduration=60.857377204 podStartE2EDuration="1m12.045730408s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:42:14.62223813 +0000 UTC m=+1035.907990080" lastFinishedPulling="2025-12-06 06:42:25.810591324 +0000 UTC m=+1047.096343284" observedRunningTime="2025-12-06 06:42:30.987810129 +0000 UTC m=+1052.273562089" watchObservedRunningTime="2025-12-06 06:42:31.045730408 +0000 UTC m=+1052.331482388"
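The "SyncLoop (PLEG): event for pod" and "Generic (PLEG): container finished" lines come from the Pod Lifecycle Event Generator, which periodically relists container state from the runtime and diffs it against the previous snapshot. A sketch of that diffing under illustrative types (these names are not the kubelet's actual ones; map iteration order also makes event order nondeterministic here):

package main

import "fmt"

type containerState map[string]string // containerID -> "running" | "exited"

type plegEvent struct {
	PodID string
	Type  string // "ContainerStarted" or "ContainerDied"
	Data  string // container ID
}

// relistDiff compares two successive relists and emits one event per
// observed transition, which the sync loop then logs and acts on.
func relistDiff(podID string, prev, cur containerState) []plegEvent {
	var events []plegEvent
	for id, s := range cur {
		if s == "running" && prev[id] != "running" {
			events = append(events, plegEvent{podID, "ContainerStarted", id})
		}
		if s == "exited" && prev[id] == "running" {
			events = append(events, plegEvent{podID, "ContainerDied", id})
		}
	}
	return events
}

func main() {
	prev := containerState{"6a9c33f9d471": "running"}
	cur := containerState{"6a9c33f9d471": "exited", "3e7ac25b1fce": "running"}
	for _, e := range relistDiff("c87fe9ea-0538-45b9-bc58-630991438ad5", prev, cur) {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.PodID, e.Type, e.Data)
	}
}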
pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" podStartSLOduration=60.857377204 podStartE2EDuration="1m12.045730408s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:42:14.62223813 +0000 UTC m=+1035.907990080" lastFinishedPulling="2025-12-06 06:42:25.810591324 +0000 UTC m=+1047.096343284" observedRunningTime="2025-12-06 06:42:30.987810129 +0000 UTC m=+1052.273562089" watchObservedRunningTime="2025-12-06 06:42:31.045730408 +0000 UTC m=+1052.331482388" Dec 06 06:42:31 crc kubenswrapper[4823]: I1206 06:42:31.072188 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" podStartSLOduration=9.19809452 podStartE2EDuration="1m12.072168315s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:22.278626757 +0000 UTC m=+983.564378727" lastFinishedPulling="2025-12-06 06:42:25.152700562 +0000 UTC m=+1046.438452522" observedRunningTime="2025-12-06 06:42:31.065944194 +0000 UTC m=+1052.351696174" watchObservedRunningTime="2025-12-06 06:42:31.072168315 +0000 UTC m=+1052.357920275" Dec 06 06:42:31 crc kubenswrapper[4823]: I1206 06:42:31.653861 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-pw4mq" podUID="eb5ef3cd-9337-4665-945a-403b2619c53d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:42:32 crc kubenswrapper[4823]: I1206 06:42:32.011788 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:42:32 crc kubenswrapper[4823]: I1206 06:42:32.029805 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" podStartSLOduration=11.156729425 podStartE2EDuration="1m12.029777927s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.93713133 +0000 UTC m=+986.222883290" lastFinishedPulling="2025-12-06 06:42:25.810179832 +0000 UTC m=+1047.095931792" observedRunningTime="2025-12-06 06:42:32.019910151 +0000 UTC m=+1053.305662111" watchObservedRunningTime="2025-12-06 06:42:32.029777927 +0000 UTC m=+1053.315529887" Dec 06 06:42:32 crc kubenswrapper[4823]: I1206 06:42:32.962913 4823 generic.go:334] "Generic (PLEG): container finished" podID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerID="3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e" exitCode=0 Dec 06 06:42:32 crc kubenswrapper[4823]: I1206 06:42:32.963129 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerDied","Data":"3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e"} Dec 06 06:42:33 crc kubenswrapper[4823]: I1206 06:42:33.845503 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75cbb7bbf4-bcdjh" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.021078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" 
event={"ID":"d50c6d95-dbef-423c-8094-f8a1634d9b72","Type":"ContainerStarted","Data":"bb6069b43128ef83fce4409cff84c32281443eeb06ca20c94879dd05396c2a17"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.021583 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.023817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" event={"ID":"cb125116-0c3b-4831-a05c-9076f5360e28","Type":"ContainerStarted","Data":"1a3a24f662fd6e830319e7dd68e7440fd1b39319f2c1b092a6af5130c7db0a85"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.023883 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.025784 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerStarted","Data":"9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.028021 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" event={"ID":"d98bfe02-e1d8-4bdf-a2e2-cf9a83964511","Type":"ContainerStarted","Data":"bc53c5f470721a005107ec4ca531260e1a67137c6dfde18040ef060ae8ffde2d"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.028336 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.030896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" event={"ID":"147b67a9-b422-48ba-b948-a1b42946ef1d","Type":"ContainerStarted","Data":"b6d1bdf945168b30987a6edf1060a1e8034e404d340435d43c6e5a53124e341c"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.031016 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.033150 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" event={"ID":"b7fb4033-737a-4492-a5fd-422532e0c693","Type":"ContainerStarted","Data":"2a12cfa4857adb73d64756649fb71011f8cfd59105416904b4df10e6ad2ce34f"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.033299 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.035385 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" event={"ID":"22c2c4cb-ba18-4f49-9986-9095779c93dc","Type":"ContainerStarted","Data":"1bd3ae8c119711261175832856acacfc5bdb8fe31d15d35447ee564fed35ddf9"} Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.035509 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.037334 4823 
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.037367 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.039722 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" event={"ID":"424f7266-0185-4f27-9de3-1daf6a06dd2c","Type":"ContainerStarted","Data":"625db429b78c2205c0f48938794966361d9eec751908c23a0e125afabbb90dc0"}
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.040615 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.059707 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" podStartSLOduration=5.690060539 podStartE2EDuration="1m16.05967638s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.693826707 +0000 UTC m=+985.979578667" lastFinishedPulling="2025-12-06 06:42:35.063442548 +0000 UTC m=+1056.349194508" observedRunningTime="2025-12-06 06:42:36.051169094 +0000 UTC m=+1057.336921084" watchObservedRunningTime="2025-12-06 06:42:36.05967638 +0000 UTC m=+1057.345428360"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.079339 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" podStartSLOduration=6.843389556 podStartE2EDuration="1m16.079315579s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.779108199 +0000 UTC m=+986.064860159" lastFinishedPulling="2025-12-06 06:42:34.015034222 +0000 UTC m=+1055.300786182" observedRunningTime="2025-12-06 06:42:36.076197859 +0000 UTC m=+1057.361949839" watchObservedRunningTime="2025-12-06 06:42:36.079315579 +0000 UTC m=+1057.365067539"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.105974 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" podStartSLOduration=5.515729774 podStartE2EDuration="1m16.105956762s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.466468725 +0000 UTC m=+985.752220685" lastFinishedPulling="2025-12-06 06:42:35.056695713 +0000 UTC m=+1056.342447673" observedRunningTime="2025-12-06 06:42:36.10243804 +0000 UTC m=+1057.388190000" watchObservedRunningTime="2025-12-06 06:42:36.105956762 +0000 UTC m=+1057.391708722"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.452834 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" podStartSLOduration=6.519223818 podStartE2EDuration="1m16.452812938s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.663520518 +0000 UTC m=+985.949272478" lastFinishedPulling="2025-12-06 06:42:34.597109638 +0000 UTC m=+1055.882861598" observedRunningTime="2025-12-06 06:42:36.451740327 +0000 UTC m=+1057.737492287" watchObservedRunningTime="2025-12-06 06:42:36.452812938 +0000 UTC m=+1057.738564898"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.457808 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" podStartSLOduration=7.576050586 podStartE2EDuration="1m17.457787092s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.134857262 +0000 UTC m=+985.420609222" lastFinishedPulling="2025-12-06 06:42:34.016593768 +0000 UTC m=+1055.302345728" observedRunningTime="2025-12-06 06:42:36.418411591 +0000 UTC m=+1057.704163551" watchObservedRunningTime="2025-12-06 06:42:36.457787092 +0000 UTC m=+1057.743539052"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.483586 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" podStartSLOduration=7.243765514 podStartE2EDuration="1m16.48356547s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.77914333 +0000 UTC m=+986.064895290" lastFinishedPulling="2025-12-06 06:42:34.018943286 +0000 UTC m=+1055.304695246" observedRunningTime="2025-12-06 06:42:36.480903792 +0000 UTC m=+1057.766655752" watchObservedRunningTime="2025-12-06 06:42:36.48356547 +0000 UTC m=+1057.769317430"
Dec 06 06:42:36 crc kubenswrapper[4823]: I1206 06:42:36.700078 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" podStartSLOduration=7.728464664 podStartE2EDuration="1m17.700057835s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.607919496 +0000 UTC m=+985.893671456" lastFinishedPulling="2025-12-06 06:42:34.579512667 +0000 UTC m=+1055.865264627" observedRunningTime="2025-12-06 06:42:36.698271313 +0000 UTC m=+1057.984023293" watchObservedRunningTime="2025-12-06 06:42:36.700057835 +0000 UTC m=+1057.985809805"
Dec 06 06:42:37 crc kubenswrapper[4823]: I1206 06:42:37.042857 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" podStartSLOduration=8.809211357 podStartE2EDuration="1m18.042831933s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.783020513 +0000 UTC m=+986.068772473" lastFinishedPulling="2025-12-06 06:42:34.016641089 +0000 UTC m=+1055.302393049" observedRunningTime="2025-12-06 06:42:37.037159208 +0000 UTC m=+1058.322911178" watchObservedRunningTime="2025-12-06 06:42:37.042831933 +0000 UTC m=+1058.328583903"
Dec 06 06:42:37 crc kubenswrapper[4823]: I1206 06:42:37.054284 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb"
Dec 06 06:42:37 crc kubenswrapper[4823]: I1206 06:42:37.088757 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nvcjb" podStartSLOduration=30.085709746 podStartE2EDuration="44.088732413s" podCreationTimestamp="2025-12-06 06:41:53 +0000 UTC" firstStartedPulling="2025-12-06 06:42:21.052603315 +0000 UTC m=+1042.338355265" lastFinishedPulling="2025-12-06 06:42:35.055625972 +0000 UTC m=+1056.341377932" observedRunningTime="2025-12-06 06:42:37.066043296 +0000 UTC m=+1058.351795266" watchObservedRunningTime="2025-12-06 06:42:37.088732413 +0000 UTC m=+1058.374484373"
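The paired SyncLoop (probe) lines, status="" followed later by status="ready", reflect the probe manager caching a result per pod and only waking the sync loop when the cached value changes. A minimal cache sketch under that assumption (the kubelet's probe manager keys results per container and probe type; this simplification keys per pod):

package main

import "fmt"

type readinessCache map[string]string // pod -> "" (not ready yet) | "ready"

// set records a result and reports whether it changed; only changes are
// worth re-syncing the pod for, which is why each status is logged once.
func (c readinessCache) set(pod, status string) bool {
	old, ok := c[pod]
	if ok && old == status {
		return false
	}
	c[pod] = status
	return true
}

func main() {
	cache := readinessCache{}
	pod := "openstack-operators/heat-operator-controller-manager-5f64f6f8bb-4fdh4"
	for _, s := range []string{"", "ready"} {
		if cache.set(pod, s) {
			fmt.Printf("SyncLoop (probe) probe=%q status=%q pod=%q\n", "readiness", s, pod)
		}
	}
}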
watchObservedRunningTime="2025-12-06 06:42:37.088732413 +0000 UTC m=+1058.374484373" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.181403 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-mwbpk" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.320389 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-nggsj" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.517460 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-d4pqr" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.637407 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-n56m7" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.709523 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-m9pvc" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.794848 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jdgsz" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.795324 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" Dec 06 06:42:40 crc kubenswrapper[4823]: I1206 06:42:40.801753 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-b45jg" Dec 06 06:42:41 crc kubenswrapper[4823]: I1206 06:42:41.012597 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ggt2m" Dec 06 06:42:41 crc kubenswrapper[4823]: I1206 06:42:41.071968 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ncd4b" Dec 06 06:42:41 crc kubenswrapper[4823]: I1206 06:42:41.159917 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-j45x4" Dec 06 06:42:41 crc kubenswrapper[4823]: I1206 06:42:41.918378 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9mbh5" Dec 06 06:42:41 crc kubenswrapper[4823]: I1206 06:42:41.984796 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vmvpr" Dec 06 06:42:42 crc kubenswrapper[4823]: I1206 06:42:42.006494 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd68fb56b-hkzrc" Dec 06 06:42:42 crc kubenswrapper[4823]: I1206 06:42:42.163197 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7lkmh" Dec 06 06:42:43 crc kubenswrapper[4823]: I1206 06:42:43.866918 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:42:43 crc kubenswrapper[4823]: I1206 
06:42:43.866985 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:42:43 crc kubenswrapper[4823]: I1206 06:42:43.909284 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:42:44 crc kubenswrapper[4823]: I1206 06:42:44.228185 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:42:44 crc kubenswrapper[4823]: I1206 06:42:44.276081 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nvcjb"] Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.202843 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nvcjb" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="registry-server" containerID="cri-o://9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322" gracePeriod=2 Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.728906 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.810650 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwmqk\" (UniqueName: \"kubernetes.io/projected/c87fe9ea-0538-45b9-bc58-630991438ad5-kube-api-access-fwmqk\") pod \"c87fe9ea-0538-45b9-bc58-630991438ad5\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.810802 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-catalog-content\") pod \"c87fe9ea-0538-45b9-bc58-630991438ad5\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.810844 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-utilities\") pod \"c87fe9ea-0538-45b9-bc58-630991438ad5\" (UID: \"c87fe9ea-0538-45b9-bc58-630991438ad5\") " Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.812168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-utilities" (OuterVolumeSpecName: "utilities") pod "c87fe9ea-0538-45b9-bc58-630991438ad5" (UID: "c87fe9ea-0538-45b9-bc58-630991438ad5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.821921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87fe9ea-0538-45b9-bc58-630991438ad5-kube-api-access-fwmqk" (OuterVolumeSpecName: "kube-api-access-fwmqk") pod "c87fe9ea-0538-45b9-bc58-630991438ad5" (UID: "c87fe9ea-0538-45b9-bc58-630991438ad5"). InnerVolumeSpecName "kube-api-access-fwmqk". 
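"Killing container with a grace period ... gracePeriod=2" means the runtime is asked to deliver SIGTERM, wait up to 2 seconds for the process to exit on its own, and only then SIGKILL. A sketch of that sequencing, with a channel standing in for process exit (names here are illustrative):

package main

import (
	"context"
	"fmt"
	"time"
)

// stopContainer models graceful termination: signal, then bounded wait.
func stopContainer(exited <-chan struct{}, grace time.Duration) string {
	fmt.Println("sending SIGTERM")
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	select {
	case <-exited:
		return "exited within grace period"
	case <-ctx.Done():
		fmt.Println("sending SIGKILL")
		return "killed after grace period"
	}
}

func main() {
	exited := make(chan struct{})
	go func() {
		// Pretend registry-server shuts down cleanly in ~500ms,
		// well inside the 2s grace period from the log.
		time.Sleep(500 * time.Millisecond)
		close(exited)
	}()
	fmt.Println(stopContainer(exited, 2*time.Second))
}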
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.868128 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c87fe9ea-0538-45b9-bc58-630991438ad5" (UID: "c87fe9ea-0538-45b9-bc58-630991438ad5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.912949 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.912996 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87fe9ea-0538-45b9-bc58-630991438ad5-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:42:46 crc kubenswrapper[4823]: I1206 06:42:46.913011 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwmqk\" (UniqueName: \"kubernetes.io/projected/c87fe9ea-0538-45b9-bc58-630991438ad5-kube-api-access-fwmqk\") on node \"crc\" DevicePath \"\"" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.212702 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" event={"ID":"18059fdc-d882-485f-9de3-0567bac485ba","Type":"ContainerStarted","Data":"8e153d96435abeb4846c07e59c0a3c30f1d9ef1575659b65c13d02813d90a856"} Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.214205 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.217511 4823 generic.go:334] "Generic (PLEG): container finished" podID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerID="9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322" exitCode=0 Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.217707 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerDied","Data":"9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322"} Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.217774 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvcjb" event={"ID":"c87fe9ea-0538-45b9-bc58-630991438ad5","Type":"ContainerDied","Data":"c5adf6a41276699f5d6190035b983f1b1c8d508e891f2be5614f018ad2612f50"} Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.217793 4823 scope.go:117] "RemoveContainer" containerID="9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.217971 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nvcjb" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.222794 4823 generic.go:334] "Generic (PLEG): container finished" podID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerID="f4303e7095050d8ee59cb79e2e49f74fdc34c8eb2e737250750d8a1cf7bfc28a" exitCode=0 Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.222877 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xczmc" event={"ID":"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e","Type":"ContainerDied","Data":"f4303e7095050d8ee59cb79e2e49f74fdc34c8eb2e737250750d8a1cf7bfc28a"} Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.241248 4823 scope.go:117] "RemoveContainer" containerID="3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.242689 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" podStartSLOduration=5.745218528 podStartE2EDuration="1m27.242654261s" podCreationTimestamp="2025-12-06 06:41:20 +0000 UTC" firstStartedPulling="2025-12-06 06:41:24.952444814 +0000 UTC m=+986.238196764" lastFinishedPulling="2025-12-06 06:42:46.449880537 +0000 UTC m=+1067.735632497" observedRunningTime="2025-12-06 06:42:47.23468778 +0000 UTC m=+1068.520439760" watchObservedRunningTime="2025-12-06 06:42:47.242654261 +0000 UTC m=+1068.528406221" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.259846 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nvcjb"] Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.269913 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nvcjb"] Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.275964 4823 scope.go:117] "RemoveContainer" containerID="6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.292484 4823 scope.go:117] "RemoveContainer" containerID="9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322" Dec 06 06:42:47 crc kubenswrapper[4823]: E1206 06:42:47.292971 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322\": container with ID starting with 9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322 not found: ID does not exist" containerID="9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.293067 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322"} err="failed to get container status \"9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322\": rpc error: code = NotFound desc = could not find container \"9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322\": container with ID starting with 9a0a6e5ab7b3319e5195b77a49c88c8e85cacf6fe78bb1ce975a97bb19cfd322 not found: ID does not exist" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.293092 4823 scope.go:117] "RemoveContainer" containerID="3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e" Dec 06 06:42:47 crc kubenswrapper[4823]: E1206 06:42:47.293563 4823 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e\": container with ID starting with 3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e not found: ID does not exist" containerID="3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.293589 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e"} err="failed to get container status \"3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e\": rpc error: code = NotFound desc = could not find container \"3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e\": container with ID starting with 3e7ac25b1fcee0050f60b23a34348bdc0ffb10a89ef9ddd79f62b0e532cef15e not found: ID does not exist" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.293606 4823 scope.go:117] "RemoveContainer" containerID="6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2" Dec 06 06:42:47 crc kubenswrapper[4823]: E1206 06:42:47.294017 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2\": container with ID starting with 6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2 not found: ID does not exist" containerID="6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2" Dec 06 06:42:47 crc kubenswrapper[4823]: I1206 06:42:47.294043 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2"} err="failed to get container status \"6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2\": rpc error: code = NotFound desc = could not find container \"6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2\": container with ID starting with 6a9c33f9d47188f089011433b2a6ec33fbcdf57bd9db2a82f9c96c28b49641f2 not found: ID does not exist" Dec 06 06:42:48 crc kubenswrapper[4823]: I1206 06:42:48.239795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xczmc" event={"ID":"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e","Type":"ContainerStarted","Data":"990023bf290935f8e2060a3d3c2cdb0fbbd3fcc5d2f3688f16cf181ab07702e5"} Dec 06 06:42:48 crc kubenswrapper[4823]: I1206 06:42:48.268941 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xczmc" podStartSLOduration=6.6674499130000004 podStartE2EDuration="1m29.268915423s" podCreationTimestamp="2025-12-06 06:41:19 +0000 UTC" firstStartedPulling="2025-12-06 06:41:25.089160137 +0000 UTC m=+986.374912097" lastFinishedPulling="2025-12-06 06:42:47.690625647 +0000 UTC m=+1068.976377607" observedRunningTime="2025-12-06 06:42:48.261593891 +0000 UTC m=+1069.547345871" watchObservedRunningTime="2025-12-06 06:42:48.268915423 +0000 UTC m=+1069.554667383" Dec 06 06:42:49 crc kubenswrapper[4823]: I1206 06:42:49.153170 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" path="/var/lib/kubelet/pods/c87fe9ea-0538-45b9-bc58-630991438ad5/volumes" Dec 06 06:42:50 crc kubenswrapper[4823]: I1206 06:42:50.584928 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:42:50 crc kubenswrapper[4823]: I1206 06:42:50.585010 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:42:50 crc kubenswrapper[4823]: I1206 06:42:50.659740 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:42:51 crc kubenswrapper[4823]: I1206 06:42:51.588006 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nb62h" Dec 06 06:43:00 crc kubenswrapper[4823]: I1206 06:43:00.636887 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:43:00 crc kubenswrapper[4823]: I1206 06:43:00.687736 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xczmc"] Dec 06 06:43:01 crc kubenswrapper[4823]: I1206 06:43:01.340959 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xczmc" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="registry-server" containerID="cri-o://990023bf290935f8e2060a3d3c2cdb0fbbd3fcc5d2f3688f16cf181ab07702e5" gracePeriod=2 Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.358006 4823 generic.go:334] "Generic (PLEG): container finished" podID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerID="990023bf290935f8e2060a3d3c2cdb0fbbd3fcc5d2f3688f16cf181ab07702e5" exitCode=0 Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.358085 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xczmc" event={"ID":"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e","Type":"ContainerDied","Data":"990023bf290935f8e2060a3d3c2cdb0fbbd3fcc5d2f3688f16cf181ab07702e5"} Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.611506 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.808613 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-catalog-content\") pod \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.808743 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-utilities\") pod \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.808806 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvd4\" (UniqueName: \"kubernetes.io/projected/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-kube-api-access-rnvd4\") pod \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\" (UID: \"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e\") " Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.813848 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-utilities" (OuterVolumeSpecName: "utilities") pod "6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" (UID: "6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.828043 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-kube-api-access-rnvd4" (OuterVolumeSpecName: "kube-api-access-rnvd4") pod "6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" (UID: "6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e"). InnerVolumeSpecName "kube-api-access-rnvd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.875327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" (UID: "6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.910852 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.910897 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:43:03 crc kubenswrapper[4823]: I1206 06:43:03.910911 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvd4\" (UniqueName: \"kubernetes.io/projected/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e-kube-api-access-rnvd4\") on node \"crc\" DevicePath \"\"" Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.368535 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xczmc" event={"ID":"6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e","Type":"ContainerDied","Data":"45a1dda0af9f834a50306612f4c55a059932ec9ad8bac0970eeb4ea91d92a992"} Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.368580 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xczmc" Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.368874 4823 scope.go:117] "RemoveContainer" containerID="990023bf290935f8e2060a3d3c2cdb0fbbd3fcc5d2f3688f16cf181ab07702e5" Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.394492 4823 scope.go:117] "RemoveContainer" containerID="f4303e7095050d8ee59cb79e2e49f74fdc34c8eb2e737250750d8a1cf7bfc28a" Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.411145 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xczmc"] Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.417360 4823 scope.go:117] "RemoveContainer" containerID="4e156fe182bd230fc4881563a353ffdd4ccd6215b7d7be46bdc61a7fe232ec2d" Dec 06 06:43:04 crc kubenswrapper[4823]: I1206 06:43:04.418504 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xczmc"] Dec 06 06:43:05 crc kubenswrapper[4823]: I1206 06:43:05.155531 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" path="/var/lib/kubelet/pods/6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e/volumes" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.843745 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b55f8b79c-gfvlq"] Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.844440 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="extract-content" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844456 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="extract-content" Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.844487 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="extract-utilities" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844500 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="extract-utilities" Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.844511 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="registry-server" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844520 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="registry-server" Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.844538 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="extract-content" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844545 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="extract-content" Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.844565 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="registry-server" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844573 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="registry-server" Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.844600 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="extract-utilities" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844608 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="extract-utilities" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844877 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87fe9ea-0538-45b9-bc58-630991438ad5" containerName="registry-server" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.844891 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5f7006-4db4-4e09-83e0-f7a1ef5d3f2e" containerName="registry-server" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.845924 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.848064 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.850027 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.850745 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.851431 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ps5s2" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.862567 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b55f8b79c-gfvlq"] Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.928648 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-868658cdc7-65l7h"] Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.929962 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:08 crc kubenswrapper[4823]: W1206 06:43:08.933047 4823 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: configmaps "dns-svc" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 06 06:43:08 crc kubenswrapper[4823]: E1206 06:43:08.933092 4823 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"dns-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.953831 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-868658cdc7-65l7h"] Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.981647 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sptbj\" (UniqueName: \"kubernetes.io/projected/97ea53a4-6c8c-452a-86d8-359d204ce8bc-kube-api-access-sptbj\") pod \"dnsmasq-dns-6b55f8b79c-gfvlq\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:08 crc kubenswrapper[4823]: I1206 06:43:08.982233 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ea53a4-6c8c-452a-86d8-359d204ce8bc-config\") pod \"dnsmasq-dns-6b55f8b79c-gfvlq\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.083758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-config\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.083821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ea53a4-6c8c-452a-86d8-359d204ce8bc-config\") pod \"dnsmasq-dns-6b55f8b79c-gfvlq\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.083851 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6b7\" (UniqueName: \"kubernetes.io/projected/659c7501-73e3-4976-a80e-4874a005a2f3-kube-api-access-dx6b7\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.083908 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-dns-svc\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.084046 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sptbj\" (UniqueName: \"kubernetes.io/projected/97ea53a4-6c8c-452a-86d8-359d204ce8bc-kube-api-access-sptbj\") pod \"dnsmasq-dns-6b55f8b79c-gfvlq\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.084850 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ea53a4-6c8c-452a-86d8-359d204ce8bc-config\") pod \"dnsmasq-dns-6b55f8b79c-gfvlq\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.109495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sptbj\" (UniqueName: \"kubernetes.io/projected/97ea53a4-6c8c-452a-86d8-359d204ce8bc-kube-api-access-sptbj\") pod \"dnsmasq-dns-6b55f8b79c-gfvlq\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.166161 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.185197 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-dns-svc\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.185326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-config\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.185390 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6b7\" (UniqueName: \"kubernetes.io/projected/659c7501-73e3-4976-a80e-4874a005a2f3-kube-api-access-dx6b7\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.186443 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-config\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.204130 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6b7\" (UniqueName: \"kubernetes.io/projected/659c7501-73e3-4976-a80e-4874a005a2f3-kube-api-access-dx6b7\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:09 crc kubenswrapper[4823]: I1206 06:43:09.608518 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b55f8b79c-gfvlq"] Dec 06 06:43:10 crc kubenswrapper[4823]: I1206 06:43:10.061221 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 06 06:43:10 crc kubenswrapper[4823]: I1206 06:43:10.067129 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-dns-svc\") pod \"dnsmasq-dns-868658cdc7-65l7h\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:10 crc kubenswrapper[4823]: I1206 06:43:10.146054 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:43:10 crc kubenswrapper[4823]: I1206 06:43:10.419705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" event={"ID":"97ea53a4-6c8c-452a-86d8-359d204ce8bc","Type":"ContainerStarted","Data":"52a462dd30a046814558adafc4767e95599b442c1d04297b3742bd058e71d14e"} Dec 06 06:43:10 crc kubenswrapper[4823]: I1206 06:43:10.632475 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-868658cdc7-65l7h"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.196724 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b55f8b79c-gfvlq"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.227582 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b588dc5c-k5ddq"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.231311 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.237078 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b588dc5c-k5ddq"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.326479 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-config\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.326784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpwlt\" (UniqueName: \"kubernetes.io/projected/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-kube-api-access-bpwlt\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.326826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-dns-svc\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.427621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-dns-svc\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.427724 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-config\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc 
kubenswrapper[4823]: I1206 06:43:11.427762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpwlt\" (UniqueName: \"kubernetes.io/projected/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-kube-api-access-bpwlt\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.429000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-dns-svc\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.429259 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-config\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.461614 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" event={"ID":"659c7501-73e3-4976-a80e-4874a005a2f3","Type":"ContainerStarted","Data":"6be2705bb75852b640b50e4f6e06bb703a5833f6e4c47a0b85753584784c9da8"} Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.467104 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpwlt\" (UniqueName: \"kubernetes.io/projected/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-kube-api-access-bpwlt\") pod \"dnsmasq-dns-b588dc5c-k5ddq\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.521362 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-868658cdc7-65l7h"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.536316 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7657cbb67-9t7pq"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.538842 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.545344 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7657cbb67-9t7pq"] Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.565289 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.633123 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-config\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.633331 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-dns-svc\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.633399 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgscc\" (UniqueName: \"kubernetes.io/projected/f319773b-70ba-4e72-ad29-46a902567c5a-kube-api-access-vgscc\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.734369 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-dns-svc\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.734437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgscc\" (UniqueName: \"kubernetes.io/projected/f319773b-70ba-4e72-ad29-46a902567c5a-kube-api-access-vgscc\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.734569 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-config\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.735395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-dns-svc\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.735608 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-config\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 crc kubenswrapper[4823]: I1206 06:43:11.757788 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgscc\" (UniqueName: \"kubernetes.io/projected/f319773b-70ba-4e72-ad29-46a902567c5a-kube-api-access-vgscc\") pod \"dnsmasq-dns-7657cbb67-9t7pq\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:11 
crc kubenswrapper[4823]: I1206 06:43:11.869396 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.126323 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b588dc5c-k5ddq"] Dec 06 06:43:12 crc kubenswrapper[4823]: W1206 06:43:12.151774 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfb50f6e_02f9_4b9d_b79a_c73439e842c5.slice/crio-75f8b1f84b76a7371fd5b7a602a8d68354cdfab8a3f93a935b8aa66e1dc5054d WatchSource:0}: Error finding container 75f8b1f84b76a7371fd5b7a602a8d68354cdfab8a3f93a935b8aa66e1dc5054d: Status 404 returned error can't find the container with id 75f8b1f84b76a7371fd5b7a602a8d68354cdfab8a3f93a935b8aa66e1dc5054d Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.394490 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7657cbb67-9t7pq"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.409293 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.410908 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.440191 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.444013 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.444270 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.444507 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jp4j4" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.444699 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.444707 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.458802 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.470425 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.492324 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" event={"ID":"bfb50f6e-02f9-4b9d-b79a-c73439e842c5","Type":"ContainerStarted","Data":"75f8b1f84b76a7371fd5b7a602a8d68354cdfab8a3f93a935b8aa66e1dc5054d"} Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.546973 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d65f7695-zgtjz"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.554803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c7ecce4-d359-486f-9386-057202b69efd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" 
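[editor's note] The two pod_startup_latency_tracker entries above (06:42:47 for telemetry-operator-controller-manager, 06:42:48 for certified-operators-xczmc) are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The following Go snippet cross-checks the telemetry-operator entry; the relationship is inferred from the logged fields themselves, not quoted from kubelet source.

```go
package main

import (
	"fmt"
	"time"
)

// Cross-check of the pod_startup_latency_tracker entry logged at 06:42:47:
//   podStartE2EDuration ≈ watchObservedRunningTime - podCreationTimestamp
//   podStartSLOduration ≈ podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // format produced by time.Time.String()
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied verbatim from the log entry above.
	created := parse("2025-12-06 06:41:20 +0000 UTC")
	firstPull := parse("2025-12-06 06:41:24.952444814 +0000 UTC")
	lastPull := parse("2025-12-06 06:42:46.449880537 +0000 UTC")
	running := parse("2025-12-06 06:42:47.242654261 +0000 UTC")

	e2e := running.Sub(created)          // 1m27.242654261s, exactly as logged
	slo := e2e - lastPull.Sub(firstPull) // ~5.745218538s vs logged 5.745218528s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```

The 10ns gap on the SLO figure is consistent with the kubelet subtracting the monotonic readings (the m=+986.238… and m=+1067.735… offsets in the entry) for the pull window rather than the wall-clock strings; the certified-operators-xczmc entry checks out the same way (89.268915423s end-to-end, 6.667449913s SLO).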
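[editor's note] The E1206/NotFound pairs logged at 06:42:47 and again around 06:43:04 are expected noise rather than failures: after "SyncLoop REMOVE", the kubelet asks CRI-O about container IDs that an earlier cleanup pass already deleted, the runtime answers with gRPC NotFound, and the kubelet records "DeleteContainer returned error" but carries on, so cleanup is effectively idempotent. A minimal sketch of that pattern (hypothetical helper, not kubelet's actual code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats a gRPC NotFound from the runtime as success:
// a container that is already gone counts as removed, mirroring how the
// kubelet proceeds past the "could not find container" errors above.
func removeIfPresent(remove func(containerID string) error, containerID string) error {
	err := remove(containerID)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // already deleted; removal is idempotent
	}
	return fmt.Errorf("failed to remove container %q: %w", containerID, err)
}

func main() {
	// Simulated runtime that no longer knows the container, like CRI-O
	// answering for 9a0a6e5a... after the pod sandbox was torn down.
	remove := func(string) error {
		return status.Error(codes.NotFound, "could not find container")
	}
	err := removeIfPresent(remove, "9a0a6e5ab7b3")
	fmt.Println("treated as removed:", err == nil)
}
```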
Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.554917 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.554980 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4q4v\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-kube-api-access-s4q4v\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555073 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555114 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555162 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555207 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555238 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555263 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555286 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c7ecce4-d359-486f-9386-057202b69efd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.555318 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.562294 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.689201 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7657cbb67-9t7pq"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.691844 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.691934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c7ecce4-d359-486f-9386-057202b69efd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.691981 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stplq\" (UniqueName: \"kubernetes.io/projected/8267e31c-32b9-4640-90f9-9078920f64d5-kube-api-access-stplq\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692017 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692046 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4q4v\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-kube-api-access-s4q4v\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692186 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692217 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-dns-svc\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692247 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-config\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692305 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692325 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.692348 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c7ecce4-d359-486f-9386-057202b69efd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.698287 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.698740 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.699000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.700258 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.704075 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.722408 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d65f7695-zgtjz"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.723318 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.760938 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.762813 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.766378 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c7ecce4-d359-486f-9386-057202b69efd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.772879 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c7ecce4-d359-486f-9386-057202b69efd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.778027 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.794676 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stplq\" (UniqueName: \"kubernetes.io/projected/8267e31c-32b9-4640-90f9-9078920f64d5-kube-api-access-stplq\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.794839 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-dns-svc\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.794870 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-config\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.800724 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4q4v\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-kube-api-access-s4q4v\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.809808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-config\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.818557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-dns-svc\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.841307 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " pod="openstack/rabbitmq-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.863814 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.863943 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.869534 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.869743 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.872747 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-svzwv" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.874964 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.875714 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.888793 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.889015 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.890748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stplq\" (UniqueName: \"kubernetes.io/projected/8267e31c-32b9-4640-90f9-9078920f64d5-kube-api-access-stplq\") pod \"dnsmasq-dns-5d65f7695-zgtjz\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:12 crc kubenswrapper[4823]: I1206 06:43:12.920516 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.998484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807fbfb1-90fe-4325-a0ac-09b309c77172-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.998540 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.998913 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.998953 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fvrw\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-kube-api-access-4fvrw\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.998971 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807fbfb1-90fe-4325-a0ac-09b309c77172-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.998996 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.999059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.999082 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.999137 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.999189 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:12.999289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.038115 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.099849 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.099915 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.099951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.099986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807fbfb1-90fe-4325-a0ac-09b309c77172-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100006 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fvrw\" (UniqueName: 
\"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-kube-api-access-4fvrw\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100093 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807fbfb1-90fe-4325-a0ac-09b309c77172-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100150 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100184 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100531 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100558 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.100889 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.101468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.103218 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.103913 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.105878 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807fbfb1-90fe-4325-a0ac-09b309c77172-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.108605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.114157 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.114542 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807fbfb1-90fe-4325-a0ac-09b309c77172-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.123933 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fvrw\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-kube-api-access-4fvrw\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.137416 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.223606 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.532832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" event={"ID":"f319773b-70ba-4e72-ad29-46a902567c5a","Type":"ContainerStarted","Data":"01d50d769688992e4e7b68a08ffce457b9d14f728883c5ebb5f0e732f3ad6794"} Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.820984 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.823377 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.830464 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.831176 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.837503 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.837780 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.837947 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.838774 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.842529 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-nx8cr" Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.873028 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 06 06:43:13 crc kubenswrapper[4823]: I1206 06:43:13.899167 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:43:13 crc kubenswrapper[4823]: W1206 06:43:13.909895 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7ecce4_d359_486f_9386_057202b69efd.slice/crio-3d89300eb588a7ae70f6ff08e7c1b1382b862f8f83efaa74acb638169e50888a WatchSource:0}: Error finding container 3d89300eb588a7ae70f6ff08e7c1b1382b862f8f83efaa74acb638169e50888a: Status 404 returned error can't find the container with id 3d89300eb588a7ae70f6ff08e7c1b1382b862f8f83efaa74acb638169e50888a Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.015727 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6649430-bcca-4949-82d4-f15ac31f36e1-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.015835 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.015929 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.015958 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.015978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6649430-bcca-4949-82d4-f15ac31f36e1-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.016045 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.016061 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.016080 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l65b\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-kube-api-access-9l65b\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.016174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.016201 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.016255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.117717 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.117783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6649430-bcca-4949-82d4-f15ac31f36e1-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.117847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.117901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.117924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.117952 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6649430-bcca-4949-82d4-f15ac31f36e1-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.118024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.118068 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.118101 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l65b\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-kube-api-access-9l65b\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.118197 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.118232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.118858 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.119104 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.119124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.119344 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.127232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6649430-bcca-4949-82d4-f15ac31f36e1-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.128827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.129061 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.131243 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6649430-bcca-4949-82d4-f15ac31f36e1-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.140774 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.141582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6649430-bcca-4949-82d4-f15ac31f36e1-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.152906 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l65b\" (UniqueName: \"kubernetes.io/projected/b6649430-bcca-4949-82d4-f15ac31f36e1-kube-api-access-9l65b\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.166267 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"b6649430-bcca-4949-82d4-f15ac31f36e1\") " pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.177374 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d65f7695-zgtjz"] Dec 06 06:43:14 crc kubenswrapper[4823]: W1206 06:43:14.202460 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8267e31c_32b9_4640_90f9_9078920f64d5.slice/crio-dc76ef8b599d262188d904374be3ef912e686daafa903daba3e9e88ae682fa4c WatchSource:0}: Error finding container dc76ef8b599d262188d904374be3ef912e686daafa903daba3e9e88ae682fa4c: Status 404 returned error can't find the container with id dc76ef8b599d262188d904374be3ef912e686daafa903daba3e9e88ae682fa4c Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.352572 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.462912 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.546067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c7ecce4-d359-486f-9386-057202b69efd","Type":"ContainerStarted","Data":"3d89300eb588a7ae70f6ff08e7c1b1382b862f8f83efaa74acb638169e50888a"} Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.549209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807fbfb1-90fe-4325-a0ac-09b309c77172","Type":"ContainerStarted","Data":"1ac9c0622b71d63b84e250947f1414dba4794b8cd98151e9987bede9c843a77c"} Dec 06 06:43:14 crc kubenswrapper[4823]: I1206 06:43:14.551411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" event={"ID":"8267e31c-32b9-4640-90f9-9078920f64d5","Type":"ContainerStarted","Data":"dc76ef8b599d262188d904374be3ef912e686daafa903daba3e9e88ae682fa4c"} Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.138605 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.154327 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.174103 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.174397 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.174496 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.174757 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rtj5l" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.180072 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.188897 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.267875 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.268051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.268080 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-kolla-config\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 
06:43:15.268101 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nmg\" (UniqueName: \"kubernetes.io/projected/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-kube-api-access-m5nmg\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.268202 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.268240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-config-data-default\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.268297 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.268335 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373125 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373213 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373272 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nmg\" (UniqueName: 
\"kubernetes.io/projected/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-kube-api-access-m5nmg\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373307 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-kolla-config\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373344 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.373364 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-config-data-default\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.374463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-config-data-default\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.374757 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.376027 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.376871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-kolla-config\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.377159 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.381945 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.391906 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.394903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.396862 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nmg\" (UniqueName: \"kubernetes.io/projected/9da6c764-c7e5-4b0b-9d9f-8a5904f84187-kube-api-access-m5nmg\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: W1206 06:43:15.407252 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6649430_bcca_4949_82d4_f15ac31f36e1.slice/crio-013fbfccde905fccbc3acb56daa4e72bfb15c3313c62f8434bd8682a87c461d7 WatchSource:0}: Error finding container 013fbfccde905fccbc3acb56daa4e72bfb15c3313c62f8434bd8682a87c461d7: Status 404 returned error can't find the container with id 013fbfccde905fccbc3acb56daa4e72bfb15c3313c62f8434bd8682a87c461d7 Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.416552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"9da6c764-c7e5-4b0b-9d9f-8a5904f84187\") " pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.512861 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 06:43:15 crc kubenswrapper[4823]: I1206 06:43:15.567933 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"b6649430-bcca-4949-82d4-f15ac31f36e1","Type":"ContainerStarted","Data":"013fbfccde905fccbc3acb56daa4e72bfb15c3313c62f8434bd8682a87c461d7"} Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.286905 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.303615 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.304877 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.307534 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-plqfg" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.307934 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.308208 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.308309 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.430490 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e707833-acd6-49f7-91f7-a3ddd3a40119-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432522 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432570 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432621 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e707833-acd6-49f7-91f7-a3ddd3a40119-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdxc\" (UniqueName: \"kubernetes.io/projected/8e707833-acd6-49f7-91f7-a3ddd3a40119-kube-api-access-xjdxc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432896 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.432940 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e707833-acd6-49f7-91f7-a3ddd3a40119-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534419 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e707833-acd6-49f7-91f7-a3ddd3a40119-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534506 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdxc\" (UniqueName: \"kubernetes.io/projected/8e707833-acd6-49f7-91f7-a3ddd3a40119-kube-api-access-xjdxc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534616 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534653 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e707833-acd6-49f7-91f7-a3ddd3a40119-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e707833-acd6-49f7-91f7-a3ddd3a40119-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534750 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.534785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.535073 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.536736 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e707833-acd6-49f7-91f7-a3ddd3a40119-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.537575 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.537745 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.538492 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e707833-acd6-49f7-91f7-a3ddd3a40119-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.545153 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e707833-acd6-49f7-91f7-a3ddd3a40119-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.556137 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdxc\" (UniqueName: \"kubernetes.io/projected/8e707833-acd6-49f7-91f7-a3ddd3a40119-kube-api-access-xjdxc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.558833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e707833-acd6-49f7-91f7-a3ddd3a40119-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.628875 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8e707833-acd6-49f7-91f7-a3ddd3a40119\") " 
pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:16 crc kubenswrapper[4823]: I1206 06:43:16.911122 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.312231 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.314688 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.322187 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nlxb8" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.322429 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.322541 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.324294 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.332993 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.493581 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06a30022-1a67-4812-941e-3118f3767d35-kolla-config\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.493651 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a30022-1a67-4812-941e-3118f3767d35-config-data\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.493691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgcd4\" (UniqueName: \"kubernetes.io/projected/06a30022-1a67-4812-941e-3118f3767d35-kube-api-access-bgcd4\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.493832 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a30022-1a67-4812-941e-3118f3767d35-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.493862 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a30022-1a67-4812-941e-3118f3767d35-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.597618 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a30022-1a67-4812-941e-3118f3767d35-config-data\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " 
pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.597781 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgcd4\" (UniqueName: \"kubernetes.io/projected/06a30022-1a67-4812-941e-3118f3767d35-kube-api-access-bgcd4\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.597884 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a30022-1a67-4812-941e-3118f3767d35-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.597919 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a30022-1a67-4812-941e-3118f3767d35-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.597969 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06a30022-1a67-4812-941e-3118f3767d35-kolla-config\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.598714 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06a30022-1a67-4812-941e-3118f3767d35-kolla-config\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.599618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a30022-1a67-4812-941e-3118f3767d35-config-data\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.603303 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a30022-1a67-4812-941e-3118f3767d35-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.618106 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a30022-1a67-4812-941e-3118f3767d35-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.628226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgcd4\" (UniqueName: \"kubernetes.io/projected/06a30022-1a67-4812-941e-3118f3767d35-kube-api-access-bgcd4\") pod \"memcached-0\" (UID: \"06a30022-1a67-4812-941e-3118f3767d35\") " pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.665955 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.689131 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9da6c764-c7e5-4b0b-9d9f-8a5904f84187","Type":"ContainerStarted","Data":"a3a8b16307148362ce0b566abe504183937fb40b2c814afbcdc5610c25e4b500"} Dec 06 06:43:17 crc kubenswrapper[4823]: I1206 06:43:17.942253 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 06:43:18 crc kubenswrapper[4823]: I1206 06:43:18.666840 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 06:43:18 crc kubenswrapper[4823]: W1206 06:43:18.744252 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a30022_1a67_4812_941e_3118f3767d35.slice/crio-d995569d9e85dacfee91e8cb3600a9571a2783c2ce007c93d049f392c9b34822 WatchSource:0}: Error finding container d995569d9e85dacfee91e8cb3600a9571a2783c2ce007c93d049f392c9b34822: Status 404 returned error can't find the container with id d995569d9e85dacfee91e8cb3600a9571a2783c2ce007c93d049f392c9b34822 Dec 06 06:43:18 crc kubenswrapper[4823]: I1206 06:43:18.835099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e707833-acd6-49f7-91f7-a3ddd3a40119","Type":"ContainerStarted","Data":"3ad08b7d990b3333bb2eb34306f3c4230ea902904fd28afce09130bce7b3e44f"} Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.205595 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.206708 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.206803 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.223743 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rkffj" Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.437363 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9lr\" (UniqueName: \"kubernetes.io/projected/de2d0c7c-d378-4a38-956d-56a576de5c21-kube-api-access-5z9lr\") pod \"kube-state-metrics-0\" (UID: \"de2d0c7c-d378-4a38-956d-56a576de5c21\") " pod="openstack/kube-state-metrics-0" Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.538464 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z9lr\" (UniqueName: \"kubernetes.io/projected/de2d0c7c-d378-4a38-956d-56a576de5c21-kube-api-access-5z9lr\") pod \"kube-state-metrics-0\" (UID: \"de2d0c7c-d378-4a38-956d-56a576de5c21\") " pod="openstack/kube-state-metrics-0" Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.591167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z9lr\" (UniqueName: \"kubernetes.io/projected/de2d0c7c-d378-4a38-956d-56a576de5c21-kube-api-access-5z9lr\") pod \"kube-state-metrics-0\" (UID: \"de2d0c7c-d378-4a38-956d-56a576de5c21\") " pod="openstack/kube-state-metrics-0" Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.658366 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 06:43:19 crc kubenswrapper[4823]: I1206 06:43:19.867544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"06a30022-1a67-4812-941e-3118f3767d35","Type":"ContainerStarted","Data":"d995569d9e85dacfee91e8cb3600a9571a2783c2ce007c93d049f392c9b34822"} Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.548008 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.552623 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.556258 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.556532 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.556874 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.556885 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ts4xm" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.556932 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.559836 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.563649 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671353 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d40b985e-9817-453a-8a4b-72d7eadf4683-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671462 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d40b985e-9817-453a-8a4b-72d7eadf4683-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjxz\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-kube-api-access-phjxz\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671854 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.671951 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.672012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-config\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774637 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774711 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774787 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774895 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d40b985e-9817-453a-8a4b-72d7eadf4683-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d40b985e-9817-453a-8a4b-72d7eadf4683-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.774998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjxz\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-kube-api-access-phjxz\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.786639 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-config\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.789627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d40b985e-9817-453a-8a4b-72d7eadf4683-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.791315 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.791361 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fab8261b70d6f995dab453a667c3bae61bb90c651f0d61d1c06bd0698dff1b77/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.796389 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.803784 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.813864 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjxz\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-kube-api-access-phjxz\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.814544 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:20 crc kubenswrapper[4823]: I1206 06:43:20.821446 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d40b985e-9817-453a-8a4b-72d7eadf4683-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:21 crc kubenswrapper[4823]: I1206 06:43:21.146062 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:21 crc kubenswrapper[4823]: I1206 06:43:21.198025 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:43:21 crc kubenswrapper[4823]: W1206 06:43:21.449090 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde2d0c7c_d378_4a38_956d_56a576de5c21.slice/crio-845fb7b05747f3c3fd4d56b1312f7e600e52397345780ad4f8c6b3e541a7da24 WatchSource:0}: Error finding container 845fb7b05747f3c3fd4d56b1312f7e600e52397345780ad4f8c6b3e541a7da24: Status 404 returned error can't find the container with id 845fb7b05747f3c3fd4d56b1312f7e600e52397345780ad4f8c6b3e541a7da24 Dec 06 06:43:21 crc kubenswrapper[4823]: I1206 06:43:21.449410 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.149353 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de2d0c7c-d378-4a38-956d-56a576de5c21","Type":"ContainerStarted","Data":"845fb7b05747f3c3fd4d56b1312f7e600e52397345780ad4f8c6b3e541a7da24"} Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.734899 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:43:22 crc kubenswrapper[4823]: W1206 06:43:22.770116 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd40b985e_9817_453a_8a4b_72d7eadf4683.slice/crio-4414f2038df659871e550ed821f8f39684a11c499553efafb3c3fe67f4c32eb7 WatchSource:0}: Error finding container 4414f2038df659871e550ed821f8f39684a11c499553efafb3c3fe67f4c32eb7: Status 404 returned error can't find the container with id 4414f2038df659871e550ed821f8f39684a11c499553efafb3c3fe67f4c32eb7 Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.825898 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-94t86"] Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.830963 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.837126 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.837396 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7c6td" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.840507 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.852815 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94t86"] Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853331 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe6c323-7053-4b9e-af90-27bb99d99ae3-ovn-controller-tls-certs\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853456 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe6c323-7053-4b9e-af90-27bb99d99ae3-scripts\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtnb\" (UniqueName: \"kubernetes.io/projected/afe6c323-7053-4b9e-af90-27bb99d99ae3-kube-api-access-7vtnb\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6c323-7053-4b9e-af90-27bb99d99ae3-combined-ca-bundle\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853760 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-run-ovn\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853875 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-run\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.853960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-log-ovn\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.866847 4823 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-s2c88"] Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.868625 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.884631 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s2c88"] Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.955708 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfb6e3c-4b92-4e55-9c69-679dc2326717-scripts\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.955799 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtnb\" (UniqueName: \"kubernetes.io/projected/afe6c323-7053-4b9e-af90-27bb99d99ae3-kube-api-access-7vtnb\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.955847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6c323-7053-4b9e-af90-27bb99d99ae3-combined-ca-bundle\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.955891 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-run-ovn\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.955935 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-log\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.955962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-run\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956010 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-log-ovn\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe6c323-7053-4b9e-af90-27bb99d99ae3-ovn-controller-tls-certs\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956160 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-etc-ovs\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956207 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-run\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956233 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe6c323-7053-4b9e-af90-27bb99d99ae3-scripts\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956274 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-lib\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.956301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h97g\" (UniqueName: \"kubernetes.io/projected/5dfb6e3c-4b92-4e55-9c69-679dc2326717-kube-api-access-5h97g\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.957019 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-log-ovn\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.957891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-run-ovn\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.958020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afe6c323-7053-4b9e-af90-27bb99d99ae3-var-run\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:22 crc kubenswrapper[4823]: I1206 06:43:22.962869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe6c323-7053-4b9e-af90-27bb99d99ae3-scripts\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.004393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe6c323-7053-4b9e-af90-27bb99d99ae3-ovn-controller-tls-certs\") pod \"ovn-controller-94t86\" 
(UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.004823 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe6c323-7053-4b9e-af90-27bb99d99ae3-combined-ca-bundle\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.021408 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtnb\" (UniqueName: \"kubernetes.io/projected/afe6c323-7053-4b9e-af90-27bb99d99ae3-kube-api-access-7vtnb\") pod \"ovn-controller-94t86\" (UID: \"afe6c323-7053-4b9e-af90-27bb99d99ae3\") " pod="openstack/ovn-controller-94t86" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.057176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-log\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.057258 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-etc-ovs\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.057289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-run\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.057321 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-lib\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.057355 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h97g\" (UniqueName: \"kubernetes.io/projected/5dfb6e3c-4b92-4e55-9c69-679dc2326717-kube-api-access-5h97g\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.057415 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfb6e3c-4b92-4e55-9c69-679dc2326717-scripts\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.059799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfb6e3c-4b92-4e55-9c69-679dc2326717-scripts\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.059943 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-log\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.060140 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-etc-ovs\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.060197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-run\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.060340 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5dfb6e3c-4b92-4e55-9c69-679dc2326717-var-lib\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.091952 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h97g\" (UniqueName: \"kubernetes.io/projected/5dfb6e3c-4b92-4e55-9c69-679dc2326717-kube-api-access-5h97g\") pod \"ovn-controller-ovs-s2c88\" (UID: \"5dfb6e3c-4b92-4e55-9c69-679dc2326717\") " pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.178486 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94t86" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.198785 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.234454 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerStarted","Data":"4414f2038df659871e550ed821f8f39684a11c499553efafb3c3fe67f4c32eb7"} Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.822127 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.825732 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.833003 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.850511 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.850580 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.851309 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-n6sdr" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.851512 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 06 06:43:23 crc kubenswrapper[4823]: I1206 06:43:23.851710 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.184981 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db13557-99bd-4223-a8f1-53de273f6ba3-config\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185132 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0db13557-99bd-4223-a8f1-53de273f6ba3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185194 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db13557-99bd-4223-a8f1-53de273f6ba3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.185251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfk7\" (UniqueName: \"kubernetes.io/projected/0db13557-99bd-4223-a8f1-53de273f6ba3-kube-api-access-7rfk7\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294493 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rfk7\" (UniqueName: \"kubernetes.io/projected/0db13557-99bd-4223-a8f1-53de273f6ba3-kube-api-access-7rfk7\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294576 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db13557-99bd-4223-a8f1-53de273f6ba3-config\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294686 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294804 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0db13557-99bd-4223-a8f1-53de273f6ba3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294878 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db13557-99bd-4223-a8f1-53de273f6ba3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.294941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 
06:43:24.295480 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.299157 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db13557-99bd-4223-a8f1-53de273f6ba3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.300887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db13557-99bd-4223-a8f1-53de273f6ba3-config\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.301248 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0db13557-99bd-4223-a8f1-53de273f6ba3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.302990 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.304907 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.323504 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rfk7\" (UniqueName: \"kubernetes.io/projected/0db13557-99bd-4223-a8f1-53de273f6ba3-kube-api-access-7rfk7\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.325049 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db13557-99bd-4223-a8f1-53de273f6ba3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.339729 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0db13557-99bd-4223-a8f1-53de273f6ba3\") " pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.465467 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94t86"] Dec 06 06:43:24 crc kubenswrapper[4823]: I1206 06:43:24.476958 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 06:43:24 crc kubenswrapper[4823]: W1206 06:43:24.524910 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe6c323_7053_4b9e_af90_27bb99d99ae3.slice/crio-d783d502e961d9dd44ba2239af9e0084adc7f6b4f36a38fa3c9ace59d1ee2a3e WatchSource:0}: Error finding container d783d502e961d9dd44ba2239af9e0084adc7f6b4f36a38fa3c9ace59d1ee2a3e: Status 404 returned error can't find the container with id d783d502e961d9dd44ba2239af9e0084adc7f6b4f36a38fa3c9ace59d1ee2a3e Dec 06 06:43:25 crc kubenswrapper[4823]: I1206 06:43:25.366885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94t86" event={"ID":"afe6c323-7053-4b9e-af90-27bb99d99ae3","Type":"ContainerStarted","Data":"d783d502e961d9dd44ba2239af9e0084adc7f6b4f36a38fa3c9ace59d1ee2a3e"} Dec 06 06:43:25 crc kubenswrapper[4823]: I1206 06:43:25.535506 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s2c88"] Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.023701 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zv8r9"] Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.025038 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.036572 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.053528 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zv8r9"] Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.172008 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a1992b-4562-4786-8e44-c95f760d1205-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.172064 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/89a1992b-4562-4786-8e44-c95f760d1205-ovn-rundir\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.172110 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/89a1992b-4562-4786-8e44-c95f760d1205-ovs-rundir\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.172130 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89a1992b-4562-4786-8e44-c95f760d1205-config\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.172156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjn7j\" 
(UniqueName: \"kubernetes.io/projected/89a1992b-4562-4786-8e44-c95f760d1205-kube-api-access-zjn7j\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.172210 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1992b-4562-4786-8e44-c95f760d1205-combined-ca-bundle\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.281878 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a1992b-4562-4786-8e44-c95f760d1205-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.281945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/89a1992b-4562-4786-8e44-c95f760d1205-ovn-rundir\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.282000 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/89a1992b-4562-4786-8e44-c95f760d1205-ovs-rundir\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.282020 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89a1992b-4562-4786-8e44-c95f760d1205-config\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.282055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjn7j\" (UniqueName: \"kubernetes.io/projected/89a1992b-4562-4786-8e44-c95f760d1205-kube-api-access-zjn7j\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.282160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1992b-4562-4786-8e44-c95f760d1205-combined-ca-bundle\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.283105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/89a1992b-4562-4786-8e44-c95f760d1205-ovn-rundir\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.283538 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/89a1992b-4562-4786-8e44-c95f760d1205-ovs-rundir\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.294028 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89a1992b-4562-4786-8e44-c95f760d1205-config\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.309160 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a1992b-4562-4786-8e44-c95f760d1205-combined-ca-bundle\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.310481 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a1992b-4562-4786-8e44-c95f760d1205-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.348262 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjn7j\" (UniqueName: \"kubernetes.io/projected/89a1992b-4562-4786-8e44-c95f760d1205-kube-api-access-zjn7j\") pod \"ovn-controller-metrics-zv8r9\" (UID: \"89a1992b-4562-4786-8e44-c95f760d1205\") " pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.374051 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zv8r9" Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.399806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s2c88" event={"ID":"5dfb6e3c-4b92-4e55-9c69-679dc2326717","Type":"ContainerStarted","Data":"931995e4d6630fa69542302683383f0ff6e711374e7436e9fcc4529e33b39eb5"} Dec 06 06:43:26 crc kubenswrapper[4823]: I1206 06:43:26.794759 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 06:43:26 crc kubenswrapper[4823]: W1206 06:43:26.850009 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0db13557_99bd_4223_a8f1_53de273f6ba3.slice/crio-b82596921a205cc6f5b1323a0c6ba232096fcc17935366d6cb46f98e811aeea9 WatchSource:0}: Error finding container b82596921a205cc6f5b1323a0c6ba232096fcc17935366d6cb46f98e811aeea9: Status 404 returned error can't find the container with id b82596921a205cc6f5b1323a0c6ba232096fcc17935366d6cb46f98e811aeea9 Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.376051 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.378868 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.383633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-w7q9x" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.383844 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.384190 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.385533 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.395521 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.407440 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zv8r9"] Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.430740 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0db13557-99bd-4223-a8f1-53de273f6ba3","Type":"ContainerStarted","Data":"b82596921a205cc6f5b1323a0c6ba232096fcc17935366d6cb46f98e811aeea9"} Dec 06 06:43:27 crc kubenswrapper[4823]: W1206 06:43:27.437579 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89a1992b_4562_4786_8e44_c95f760d1205.slice/crio-d8a630ad6c627eff46252a7f0da02215b23b6dd9c50f312b7ef1f92766a23860 WatchSource:0}: Error finding container d8a630ad6c627eff46252a7f0da02215b23b6dd9c50f312b7ef1f92766a23860: Status 404 returned error can't find the container with id d8a630ad6c627eff46252a7f0da02215b23b6dd9c50f312b7ef1f92766a23860 Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.570579 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.570899 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.570978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.571023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.571038 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.571060 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.571104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.571126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7pg\" (UniqueName: \"kubernetes.io/projected/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-kube-api-access-dt7pg\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672513 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672564 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt7pg\" (UniqueName: \"kubernetes.io/projected/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-kube-api-access-dt7pg\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672803 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.672875 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.673894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.674438 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.675157 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.675387 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.681795 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.687704 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.693621 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt7pg\" (UniqueName: \"kubernetes.io/projected/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-kube-api-access-dt7pg\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.704781 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e7b2f78-e48e-40c3-a0e9-d1b78608da3e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.724131 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e\") " pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:27 crc kubenswrapper[4823]: I1206 06:43:27.753248 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 06:43:28 crc kubenswrapper[4823]: I1206 06:43:28.495422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zv8r9" event={"ID":"89a1992b-4562-4786-8e44-c95f760d1205","Type":"ContainerStarted","Data":"d8a630ad6c627eff46252a7f0da02215b23b6dd9c50f312b7ef1f92766a23860"} Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.936697 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.937644 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.938020 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5z9lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.936697 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.937644 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.938020 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5z9lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(de2d0c7c-d378-4a38-956d-56a576de5c21): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError"
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.940177 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21"
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.945281 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62"
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.945515 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phjxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(d40b985e-9817-453a-8a4b-72d7eadf4683): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 06:43:45 crc kubenswrapper[4823]: E1206 06:43:45.947088 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683"
Dec 06 06:43:46 crc kubenswrapper[4823]: I1206 06:43:46.112382 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 06 06:43:46 crc kubenswrapper[4823]: E1206 06:43:46.668218 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683"
Dec 06 06:43:46 crc kubenswrapper[4823]: E1206 06:43:46.668470 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21"
Dec 06 06:43:49 crc kubenswrapper[4823]: E1206 06:43:49.556580 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-mariadb:watcher_latest"
Dec 06 06:43:49 crc kubenswrapper[4823]: E1206 06:43:49.557004 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled"
image="38.102.83.174:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Dec 06 06:43:49 crc kubenswrapper[4823]: E1206 06:43:49.557201 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.174:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5nmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(9da6c764-c7e5-4b0b-9d9f-8a5904f84187): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:43:49 crc kubenswrapper[4823]: E1206 06:43:49.558763 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="9da6c764-c7e5-4b0b-9d9f-8a5904f84187" Dec 06 06:43:49 crc kubenswrapper[4823]: E1206 06:43:49.690920 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="9da6c764-c7e5-4b0b-9d9f-8a5904f84187" Dec 06 06:44:00 crc kubenswrapper[4823]: E1206 06:44:00.085174 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Dec 06 06:44:00 crc kubenswrapper[4823]: E1206 06:44:00.085775 4823 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Dec 06 06:44:00 crc kubenswrapper[4823]: E1206 06:44:00.085934 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.174:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599hfbh659h99h7h57ch678hb6h557h657h557h5cfh66dhf8hcch586h56fhc4h7fh696h85hfh564h6fh585hfhf4hddh65dh88h9h58cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5h97g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-s2c88_openstack(5dfb6e3c-4b92-4e55-9c69-679dc2326717): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:00 crc kubenswrapper[4823]: E1206 06:44:00.087317 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-s2c88" podUID="5dfb6e3c-4b92-4e55-9c69-679dc2326717" Dec 06 06:44:00 crc kubenswrapper[4823]: E1206 06:44:00.779004 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-s2c88" podUID="5dfb6e3c-4b92-4e55-9c69-679dc2326717" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.211700 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.211765 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.211930 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(807fbfb1-90fe-4325-a0ac-09b309c77172): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.213144 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.223382 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.223463 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.223618 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4q4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3c7ecce4-d359-486f-9386-057202b69efd): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.225011 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="3c7ecce4-d359-486f-9386-057202b69efd" Dec 06 06:44:01 crc kubenswrapper[4823]: W1206 06:44:01.782527 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e7b2f78_e48e_40c3_a0e9_d1b78608da3e.slice/crio-bb33da0b52c9948952ce2cd4a2fa032c230adc9a7892cda429647df4e6c1139a WatchSource:0}: Error finding container bb33da0b52c9948952ce2cd4a2fa032c230adc9a7892cda429647df4e6c1139a: Status 404 returned error can't find the container with id bb33da0b52c9948952ce2cd4a2fa032c230adc9a7892cda429647df4e6c1139a Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.959269 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-memcached:watcher_latest" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.959351 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-memcached:watcher_latest" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.959605 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.174:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n64ch65ch97h5fbh5c5hfbh676h575h59h5f5h665h5c4hbdh66h6h554h5d4h677h6bhc7h65ch84h5ch7dh79h56bh5fbh7dhbdh8bh98h65fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgcd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(06a30022-1a67-4812-941e-3118f3767d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:01 crc kubenswrapper[4823]: E1206 06:44:01.960940 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="06a30022-1a67-4812-941e-3118f3767d35" Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.021542 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3c7ecce4-d359-486f-9386-057202b69efd" Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.021711 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.051628 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.051699 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.051877 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 
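The pull failures above follow a fixed escalation that is readable from the file/line tags: log.go:32 reports the CRI pull failure, kuberuntime_image.go:55 records it, kuberuntime_manager.go:1274 dumps the full container spec as an unhandled start error, and pod_workers.go:1301 marks the sync ErrImagePull; the next sync then surfaces as ImagePullBackOff (compare 06:44:01.225011 with 06:44:02.021542 for rabbitmq-server-0). The repeated setup-container specs for the three RabbitMQ statefulsets are identical apart from the pod UID and the projected token volume (kube-api-access-4fvrw, -s4q4v, -9l65b). A small sketch of the retry cadence, assuming kubelet's upstream image-pull backoff defaults (10s initial delay doubling to a 300s cap; these values are an assumption, not read from this cluster's configuration):

    # Illustrative only: approximates kubelet's default image-pull backoff
    # progression (assumed 10s initial, doubling per failure, capped at 300s).
    INITIAL, CAP = 10, 300

    def backoff_schedule(attempts):
        """Delay in seconds before each retry after the first failed pull."""
        return [min(INITIAL * 2**k, CAP) for k in range(attempts)]

    print(backoff_schedule(7))  # [10, 20, 40, 80, 160, 300, 300]

Under those assumptions the gaps between successive Back-off entries for one image grow until they plateau at the cap, which is consistent with the roughly repeating ImagePullBackOff records in this excerpt.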
Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.051877 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l65b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(b6649430-bcca-4949-82d4-f15ac31f36e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.053052 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="b6649430-bcca-4949-82d4-f15ac31f36e1"
Dec 06 06:44:02 crc kubenswrapper[4823]: I1206 06:44:02.817179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e","Type":"ContainerStarted","Data":"bb33da0b52c9948952ce2cd4a2fa032c230adc9a7892cda429647df4e6c1139a"}
Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.820540 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="06a30022-1a67-4812-941e-3118f3767d35"
Dec 06 06:44:02 crc kubenswrapper[4823]: E1206 06:44:02.821749 4823
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="b6649430-bcca-4949-82d4-f15ac31f36e1" Dec 06 06:44:03 crc kubenswrapper[4823]: E1206 06:44:03.396166 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Dec 06 06:44:03 crc kubenswrapper[4823]: E1206 06:44:03.396241 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Dec 06 06:44:03 crc kubenswrapper[4823]: E1206 06:44:03.396408 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:38.102.83.174:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n79h5f7hch694h545h75h89h98h599h5c8h68hfh6ch76h7ch644h68h5cfh8fh8fh568h64ch6h5c7h78hf8h5ffh576h68ch574hcfh95q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rfk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(0db13557-99bd-4223-a8f1-53de273f6ba3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.045963 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.046013 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.046160 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgscc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7657cbb67-9t7pq_openstack(f319773b-70ba-4e72-ad29-46a902567c5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.047369 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" podUID="f319773b-70ba-4e72-ad29-46a902567c5a" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.332606 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.332676 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.332812 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.174:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599hfbh659h99h7h57ch678hb6h557h657h557h5cfh66dhf8hcch586h56fhc4h7fh696h85hfh564h6fh585hfhf4hddh65dh88h9h58cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vtnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-94t86_openstack(afe6c323-7053-4b9e-af90-27bb99d99ae3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.334814 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-94t86" podUID="afe6c323-7053-4b9e-af90-27bb99d99ae3" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.374468 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.374537 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.374706 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpwlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-b588dc5c-k5ddq_openstack(bfb50f6e-02f9-4b9d-b79a-c73439e842c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.375924 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" podUID="bfb50f6e-02f9-4b9d-b79a-c73439e842c5" Dec 06 
06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.465126 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.465204 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.465336 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx6b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-868658cdc7-65l7h_openstack(659c7501-73e3-4976-a80e-4874a005a2f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.466778 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" podUID="659c7501-73e3-4976-a80e-4874a005a2f3" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.800260 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.800324 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.800466 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stplq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d65f7695-zgtjz_openstack(8267e31c-32b9-4640-90f9-9078920f64d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.801692 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" podUID="8267e31c-32b9-4640-90f9-9078920f64d5" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.831054 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" podUID="bfb50f6e-02f9-4b9d-b79a-c73439e842c5" Dec 06 06:44:04 
crc kubenswrapper[4823]: E1206 06:44:04.831253 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-94t86" podUID="afe6c323-7053-4b9e-af90-27bb99d99ae3" Dec 06 06:44:04 crc kubenswrapper[4823]: E1206 06:44:04.832470 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" podUID="8267e31c-32b9-4640-90f9-9078920f64d5" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.085976 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.086039 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.086212 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.174:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sptbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6b55f8b79c-gfvlq_openstack(97ea53a4-6c8c-452a-86d8-359d204ce8bc): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.088126 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" podUID="97ea53a4-6c8c-452a-86d8-359d204ce8bc" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.555498 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.555891 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n64h55dh566h5d4h67h549h9dhffh5bbhf6h584h68dh56dh59chd4h695h9ch5bch5b8h684hc5h87h67ch589h65ch5dch95hd6h597h589hcch554q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjn7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-zv8r9_openstack(89a1992b-4562-4786-8e44-c95f760d1205): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Dec 06 06:44:05 crc kubenswrapper[4823]: E1206 06:44:05.557067 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-zv8r9" podUID="89a1992b-4562-4786-8e44-c95f760d1205" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.627566 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.738021 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-dns-svc\") pod \"659c7501-73e3-4976-a80e-4874a005a2f3\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.738112 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-config\") pod \"659c7501-73e3-4976-a80e-4874a005a2f3\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.738169 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx6b7\" (UniqueName: \"kubernetes.io/projected/659c7501-73e3-4976-a80e-4874a005a2f3-kube-api-access-dx6b7\") pod \"659c7501-73e3-4976-a80e-4874a005a2f3\" (UID: \"659c7501-73e3-4976-a80e-4874a005a2f3\") " Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.739098 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "659c7501-73e3-4976-a80e-4874a005a2f3" (UID: "659c7501-73e3-4976-a80e-4874a005a2f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.739338 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-config" (OuterVolumeSpecName: "config") pod "659c7501-73e3-4976-a80e-4874a005a2f3" (UID: "659c7501-73e3-4976-a80e-4874a005a2f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.743812 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659c7501-73e3-4976-a80e-4874a005a2f3-kube-api-access-dx6b7" (OuterVolumeSpecName: "kube-api-access-dx6b7") pod "659c7501-73e3-4976-a80e-4874a005a2f3" (UID: "659c7501-73e3-4976-a80e-4874a005a2f3"). InnerVolumeSpecName "kube-api-access-dx6b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.786778 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.840490 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.840527 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659c7501-73e3-4976-a80e-4874a005a2f3-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.840543 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx6b7\" (UniqueName: \"kubernetes.io/projected/659c7501-73e3-4976-a80e-4874a005a2f3-kube-api-access-dx6b7\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.841642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" event={"ID":"f319773b-70ba-4e72-ad29-46a902567c5a","Type":"ContainerDied","Data":"01d50d769688992e4e7b68a08ffce457b9d14f728883c5ebb5f0e732f3ad6794"} Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.841783 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7657cbb67-9t7pq" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.845378 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" event={"ID":"659c7501-73e3-4976-a80e-4874a005a2f3","Type":"ContainerDied","Data":"6be2705bb75852b640b50e4f6e06bb703a5833f6e4c47a0b85753584784c9da8"} Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.845952 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-868658cdc7-65l7h" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.942495 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-dns-svc\") pod \"f319773b-70ba-4e72-ad29-46a902567c5a\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.942798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgscc\" (UniqueName: \"kubernetes.io/projected/f319773b-70ba-4e72-ad29-46a902567c5a-kube-api-access-vgscc\") pod \"f319773b-70ba-4e72-ad29-46a902567c5a\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.942838 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-config\") pod \"f319773b-70ba-4e72-ad29-46a902567c5a\" (UID: \"f319773b-70ba-4e72-ad29-46a902567c5a\") " Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.943689 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-config" (OuterVolumeSpecName: "config") pod "f319773b-70ba-4e72-ad29-46a902567c5a" (UID: "f319773b-70ba-4e72-ad29-46a902567c5a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.943949 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f319773b-70ba-4e72-ad29-46a902567c5a" (UID: "f319773b-70ba-4e72-ad29-46a902567c5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.945059 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.945089 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f319773b-70ba-4e72-ad29-46a902567c5a-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.946331 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-868658cdc7-65l7h"] Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.946877 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f319773b-70ba-4e72-ad29-46a902567c5a-kube-api-access-vgscc" (OuterVolumeSpecName: "kube-api-access-vgscc") pod "f319773b-70ba-4e72-ad29-46a902567c5a" (UID: "f319773b-70ba-4e72-ad29-46a902567c5a"). InnerVolumeSpecName "kube-api-access-vgscc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:05 crc kubenswrapper[4823]: I1206 06:44:05.955376 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-868658cdc7-65l7h"] Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.046298 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgscc\" (UniqueName: \"kubernetes.io/projected/f319773b-70ba-4e72-ad29-46a902567c5a-kube-api-access-vgscc\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.208781 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7657cbb67-9t7pq"] Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.216076 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7657cbb67-9t7pq"] Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.235687 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.352060 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ea53a4-6c8c-452a-86d8-359d204ce8bc-config\") pod \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.352683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sptbj\" (UniqueName: \"kubernetes.io/projected/97ea53a4-6c8c-452a-86d8-359d204ce8bc-kube-api-access-sptbj\") pod \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\" (UID: \"97ea53a4-6c8c-452a-86d8-359d204ce8bc\") " Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.353029 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ea53a4-6c8c-452a-86d8-359d204ce8bc-config" (OuterVolumeSpecName: "config") pod "97ea53a4-6c8c-452a-86d8-359d204ce8bc" (UID: "97ea53a4-6c8c-452a-86d8-359d204ce8bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.353302 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ea53a4-6c8c-452a-86d8-359d204ce8bc-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.357640 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ea53a4-6c8c-452a-86d8-359d204ce8bc-kube-api-access-sptbj" (OuterVolumeSpecName: "kube-api-access-sptbj") pod "97ea53a4-6c8c-452a-86d8-359d204ce8bc" (UID: "97ea53a4-6c8c-452a-86d8-359d204ce8bc"). InnerVolumeSpecName "kube-api-access-sptbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.454989 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sptbj\" (UniqueName: \"kubernetes.io/projected/97ea53a4-6c8c-452a-86d8-359d204ce8bc-kube-api-access-sptbj\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.857366 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e707833-acd6-49f7-91f7-a3ddd3a40119","Type":"ContainerStarted","Data":"af17a10c5565a9d2975eb472d3235cdaf713c2a497de2f75072f9961bf84f4b9"} Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.859872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" event={"ID":"97ea53a4-6c8c-452a-86d8-359d204ce8bc","Type":"ContainerDied","Data":"52a462dd30a046814558adafc4767e95599b442c1d04297b3742bd058e71d14e"} Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.859949 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b55f8b79c-gfvlq" Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.932597 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b55f8b79c-gfvlq"] Dec 06 06:44:06 crc kubenswrapper[4823]: I1206 06:44:06.942400 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b55f8b79c-gfvlq"] Dec 06 06:44:07 crc kubenswrapper[4823]: I1206 06:44:07.153812 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659c7501-73e3-4976-a80e-4874a005a2f3" path="/var/lib/kubelet/pods/659c7501-73e3-4976-a80e-4874a005a2f3/volumes" Dec 06 06:44:07 crc kubenswrapper[4823]: I1206 06:44:07.154248 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ea53a4-6c8c-452a-86d8-359d204ce8bc" path="/var/lib/kubelet/pods/97ea53a4-6c8c-452a-86d8-359d204ce8bc/volumes" Dec 06 06:44:07 crc kubenswrapper[4823]: I1206 06:44:07.154731 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f319773b-70ba-4e72-ad29-46a902567c5a" path="/var/lib/kubelet/pods/f319773b-70ba-4e72-ad29-46a902567c5a/volumes" Dec 06 06:44:07 crc kubenswrapper[4823]: E1206 06:44:07.649377 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="0db13557-99bd-4223-a8f1-53de273f6ba3" Dec 06 06:44:07 crc kubenswrapper[4823]: I1206 06:44:07.869642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0db13557-99bd-4223-a8f1-53de273f6ba3","Type":"ContainerStarted","Data":"258f0a6abca097f1a199f9c8edd16e85d8b7dccfe40ca6af86d9129363a7db1c"} Dec 06 06:44:07 crc kubenswrapper[4823]: E1206 06:44:07.873403 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="0db13557-99bd-4223-a8f1-53de273f6ba3" Dec 06 06:44:08 crc kubenswrapper[4823]: I1206 06:44:08.881571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zv8r9" event={"ID":"89a1992b-4562-4786-8e44-c95f760d1205","Type":"ContainerStarted","Data":"382724a3845ecd9e486e4b94c3ce26b68b99919d03b0b02c1805b1fb97c0d48e"} Dec 06 06:44:08 crc kubenswrapper[4823]: I1206 06:44:08.885590 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerStarted","Data":"0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d"} Dec 06 06:44:08 crc kubenswrapper[4823]: I1206 06:44:08.889712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9da6c764-c7e5-4b0b-9d9f-8a5904f84187","Type":"ContainerStarted","Data":"b9b9fbca3e6400b27de82fd4143e586f7790dd1cc566af567a28744c055af69f"} Dec 06 06:44:08 crc kubenswrapper[4823]: E1206 06:44:08.890846 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="0db13557-99bd-4223-a8f1-53de273f6ba3" Dec 06 06:44:08 crc 
kubenswrapper[4823]: I1206 06:44:08.917369 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zv8r9" podStartSLOduration=-9223371992.937435 podStartE2EDuration="43.917340447s" podCreationTimestamp="2025-12-06 06:43:25 +0000 UTC" firstStartedPulling="2025-12-06 06:43:27.450424462 +0000 UTC m=+1108.736176422" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:08.90399711 +0000 UTC m=+1150.189749070" watchObservedRunningTime="2025-12-06 06:44:08.917340447 +0000 UTC m=+1150.203092407" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.352019 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b588dc5c-k5ddq"] Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.396963 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86cf857789-g4nlb"] Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.399317 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.405800 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.427910 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86cf857789-g4nlb"] Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.507561 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-config\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.507676 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-ovsdbserver-sb\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.507749 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfktx\" (UniqueName: \"kubernetes.io/projected/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-kube-api-access-lfktx\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.507845 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-dns-svc\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.609108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-dns-svc\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.609506 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-config\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.609553 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-ovsdbserver-sb\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.609604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfktx\" (UniqueName: \"kubernetes.io/projected/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-kube-api-access-lfktx\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.610691 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-dns-svc\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.610783 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-config\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.611421 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-ovsdbserver-sb\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.655809 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfktx\" (UniqueName: \"kubernetes.io/projected/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-kube-api-access-lfktx\") pod \"dnsmasq-dns-86cf857789-g4nlb\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.741621 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d65f7695-zgtjz"] Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.772353 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.802620 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75f8fbf54c-4622b"] Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.804610 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.809500 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.813806 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-dns-svc\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.813852 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-nb\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.813889 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jgq\" (UniqueName: \"kubernetes.io/projected/7d9a34b1-6dcf-4c85-8518-210aa4c49591-kube-api-access-85jgq\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.813974 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-config\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.813994 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-sb\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.825632 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f8fbf54c-4622b"] Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.915635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-dns-svc\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.915701 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-nb\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.915737 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jgq\" (UniqueName: \"kubernetes.io/projected/7d9a34b1-6dcf-4c85-8518-210aa4c49591-kube-api-access-85jgq\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " 
pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.915802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-config\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.915823 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-sb\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.916897 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-sb\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.917389 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-nb\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.917970 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-config\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.918141 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-dns-svc\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.951220 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jgq\" (UniqueName: \"kubernetes.io/projected/7d9a34b1-6dcf-4c85-8518-210aa4c49591-kube-api-access-85jgq\") pod \"dnsmasq-dns-75f8fbf54c-4622b\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") " pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.954103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" event={"ID":"bfb50f6e-02f9-4b9d-b79a-c73439e842c5","Type":"ContainerDied","Data":"75f8b1f84b76a7371fd5b7a602a8d68354cdfab8a3f93a935b8aa66e1dc5054d"} Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.954158 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f8b1f84b76a7371fd5b7a602a8d68354cdfab8a3f93a935b8aa66e1dc5054d" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.970773 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.986762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e","Type":"ContainerStarted","Data":"429b758d0aa013d702ca291a48232c385284202063330f2ddb15fdd44ab4ae8c"} Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.986817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e7b2f78-e48e-40c3-a0e9-d1b78608da3e","Type":"ContainerStarted","Data":"2ba4378b15dbfe95730fedfae284d9ce7be54e11ad1093ebe8dc68cf6f2db432"} Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.994969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de2d0c7c-d378-4a38-956d-56a576de5c21","Type":"ContainerStarted","Data":"d9932ec16bd69a77bb4af4b0c40d2c82a078aa359ca3fa24036c6414537d1360"} Dec 06 06:44:09 crc kubenswrapper[4823]: I1206 06:44:09.995243 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.049105 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.310754168 podStartE2EDuration="51.049074611s" podCreationTimestamp="2025-12-06 06:43:19 +0000 UTC" firstStartedPulling="2025-12-06 06:43:21.457517264 +0000 UTC m=+1102.743269224" lastFinishedPulling="2025-12-06 06:44:09.195837707 +0000 UTC m=+1150.481589667" observedRunningTime="2025-12-06 06:44:10.02381048 +0000 UTC m=+1151.309562440" watchObservedRunningTime="2025-12-06 06:44:10.049074611 +0000 UTC m=+1151.334826561" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.063110 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=40.259021248 podStartE2EDuration="44.063081907s" podCreationTimestamp="2025-12-06 06:43:26 +0000 UTC" firstStartedPulling="2025-12-06 06:44:02.021467823 +0000 UTC m=+1143.307219783" lastFinishedPulling="2025-12-06 06:44:05.825528482 +0000 UTC m=+1147.111280442" observedRunningTime="2025-12-06 06:44:10.054849188 +0000 UTC m=+1151.340601148" watchObservedRunningTime="2025-12-06 06:44:10.063081907 +0000 UTC m=+1151.348833867" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.118630 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-config\") pod \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.118818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpwlt\" (UniqueName: \"kubernetes.io/projected/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-kube-api-access-bpwlt\") pod \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.118985 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-dns-svc\") pod \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\" (UID: \"bfb50f6e-02f9-4b9d-b79a-c73439e842c5\") " Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.119499 4823 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-config" (OuterVolumeSpecName: "config") pod "bfb50f6e-02f9-4b9d-b79a-c73439e842c5" (UID: "bfb50f6e-02f9-4b9d-b79a-c73439e842c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.119838 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.119954 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfb50f6e-02f9-4b9d-b79a-c73439e842c5" (UID: "bfb50f6e-02f9-4b9d-b79a-c73439e842c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.132261 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-kube-api-access-bpwlt" (OuterVolumeSpecName: "kube-api-access-bpwlt") pod "bfb50f6e-02f9-4b9d-b79a-c73439e842c5" (UID: "bfb50f6e-02f9-4b9d-b79a-c73439e842c5"). InnerVolumeSpecName "kube-api-access-bpwlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.166457 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.221905 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.221943 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpwlt\" (UniqueName: \"kubernetes.io/projected/bfb50f6e-02f9-4b9d-b79a-c73439e842c5-kube-api-access-bpwlt\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.321788 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.426380 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stplq\" (UniqueName: \"kubernetes.io/projected/8267e31c-32b9-4640-90f9-9078920f64d5-kube-api-access-stplq\") pod \"8267e31c-32b9-4640-90f9-9078920f64d5\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.426585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-config\") pod \"8267e31c-32b9-4640-90f9-9078920f64d5\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.426740 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-dns-svc\") pod \"8267e31c-32b9-4640-90f9-9078920f64d5\" (UID: \"8267e31c-32b9-4640-90f9-9078920f64d5\") " Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.427038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-config" (OuterVolumeSpecName: "config") pod "8267e31c-32b9-4640-90f9-9078920f64d5" (UID: "8267e31c-32b9-4640-90f9-9078920f64d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.427312 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.427477 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8267e31c-32b9-4640-90f9-9078920f64d5" (UID: "8267e31c-32b9-4640-90f9-9078920f64d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.432627 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8267e31c-32b9-4640-90f9-9078920f64d5-kube-api-access-stplq" (OuterVolumeSpecName: "kube-api-access-stplq") pod "8267e31c-32b9-4640-90f9-9078920f64d5" (UID: "8267e31c-32b9-4640-90f9-9078920f64d5"). InnerVolumeSpecName "kube-api-access-stplq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.489733 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86cf857789-g4nlb"] Dec 06 06:44:10 crc kubenswrapper[4823]: W1206 06:44:10.490357 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ba0acb8_4751_4dca_bda8_a7a3549ddf1f.slice/crio-10e5925d7ccd49fb2f4855ec2da0b0ed560cd2d2dc7757b92d31f8fb0d9df0c3 WatchSource:0}: Error finding container 10e5925d7ccd49fb2f4855ec2da0b0ed560cd2d2dc7757b92d31f8fb0d9df0c3: Status 404 returned error can't find the container with id 10e5925d7ccd49fb2f4855ec2da0b0ed560cd2d2dc7757b92d31f8fb0d9df0c3 Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.528821 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8267e31c-32b9-4640-90f9-9078920f64d5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.528869 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stplq\" (UniqueName: \"kubernetes.io/projected/8267e31c-32b9-4640-90f9-9078920f64d5-kube-api-access-stplq\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:10 crc kubenswrapper[4823]: I1206 06:44:10.686611 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f8fbf54c-4622b"] Dec 06 06:44:10 crc kubenswrapper[4823]: W1206 06:44:10.694255 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d9a34b1_6dcf_4c85_8518_210aa4c49591.slice/crio-90843e9e4f8394e26da7c1089262dcd0c489d9ea8fad713fbaa3ed7bb1ae2cc5 WatchSource:0}: Error finding container 90843e9e4f8394e26da7c1089262dcd0c489d9ea8fad713fbaa3ed7bb1ae2cc5: Status 404 returned error can't find the container with id 90843e9e4f8394e26da7c1089262dcd0c489d9ea8fad713fbaa3ed7bb1ae2cc5 Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.010366 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" event={"ID":"8267e31c-32b9-4640-90f9-9078920f64d5","Type":"ContainerDied","Data":"dc76ef8b599d262188d904374be3ef912e686daafa903daba3e9e88ae682fa4c"} Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.010426 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d65f7695-zgtjz" Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.013515 4823 generic.go:334] "Generic (PLEG): container finished" podID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerID="bfd358bcbea0c2b6e028160aafab0f0886653a3bdffa8b606434cfeef6dd6d30" exitCode=0 Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.013610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" event={"ID":"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f","Type":"ContainerDied","Data":"bfd358bcbea0c2b6e028160aafab0f0886653a3bdffa8b606434cfeef6dd6d30"} Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.013645 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" event={"ID":"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f","Type":"ContainerStarted","Data":"10e5925d7ccd49fb2f4855ec2da0b0ed560cd2d2dc7757b92d31f8fb0d9df0c3"} Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.017328 4823 generic.go:334] "Generic (PLEG): container finished" podID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerID="68c978611404224898a6b1716ddaa7df0ff38880ef50e3a9df027dcb17308436" exitCode=0 Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.017547 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" event={"ID":"7d9a34b1-6dcf-4c85-8518-210aa4c49591","Type":"ContainerDied","Data":"68c978611404224898a6b1716ddaa7df0ff38880ef50e3a9df027dcb17308436"} Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.017679 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" event={"ID":"7d9a34b1-6dcf-4c85-8518-210aa4c49591","Type":"ContainerStarted","Data":"90843e9e4f8394e26da7c1089262dcd0c489d9ea8fad713fbaa3ed7bb1ae2cc5"} Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.017693 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b588dc5c-k5ddq" Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.180894 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d65f7695-zgtjz"] Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.190811 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d65f7695-zgtjz"] Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.229006 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b588dc5c-k5ddq"] Dec 06 06:44:11 crc kubenswrapper[4823]: I1206 06:44:11.248421 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b588dc5c-k5ddq"] Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.026496 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" event={"ID":"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f","Type":"ContainerStarted","Data":"98b49a678f159e63ca28feb23d345fda169b002909ba7006fb0d3729d58ea5e1"} Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.027966 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.032003 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" event={"ID":"7d9a34b1-6dcf-4c85-8518-210aa4c49591","Type":"ContainerStarted","Data":"cd8342201160926ef9133129ae60929ca3ea4c6d7d31d321b0e90ae04a45418d"} Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.032787 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.049543 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" podStartSLOduration=2.977536535 podStartE2EDuration="3.049518729s" podCreationTimestamp="2025-12-06 06:44:09 +0000 UTC" firstStartedPulling="2025-12-06 06:44:10.499378754 +0000 UTC m=+1151.785130714" lastFinishedPulling="2025-12-06 06:44:10.571360948 +0000 UTC m=+1151.857112908" observedRunningTime="2025-12-06 06:44:12.044505194 +0000 UTC m=+1153.330257154" watchObservedRunningTime="2025-12-06 06:44:12.049518729 +0000 UTC m=+1153.335270689" Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.075863 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" podStartSLOduration=3.075838341 podStartE2EDuration="3.075838341s" podCreationTimestamp="2025-12-06 06:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:12.068077277 +0000 UTC m=+1153.353829227" watchObservedRunningTime="2025-12-06 06:44:12.075838341 +0000 UTC m=+1153.361590301" Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.753446 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.753861 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 06 06:44:12 crc kubenswrapper[4823]: I1206 06:44:12.798317 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 06 06:44:13 crc kubenswrapper[4823]: I1206 06:44:13.044654 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="8e707833-acd6-49f7-91f7-a3ddd3a40119" containerID="af17a10c5565a9d2975eb472d3235cdaf713c2a497de2f75072f9961bf84f4b9" exitCode=0 Dec 06 06:44:13 crc kubenswrapper[4823]: I1206 06:44:13.044701 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e707833-acd6-49f7-91f7-a3ddd3a40119","Type":"ContainerDied","Data":"af17a10c5565a9d2975eb472d3235cdaf713c2a497de2f75072f9961bf84f4b9"} Dec 06 06:44:13 crc kubenswrapper[4823]: I1206 06:44:13.161309 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8267e31c-32b9-4640-90f9-9078920f64d5" path="/var/lib/kubelet/pods/8267e31c-32b9-4640-90f9-9078920f64d5/volumes" Dec 06 06:44:13 crc kubenswrapper[4823]: I1206 06:44:13.162065 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb50f6e-02f9-4b9d-b79a-c73439e842c5" path="/var/lib/kubelet/pods/bfb50f6e-02f9-4b9d-b79a-c73439e842c5/volumes" Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.053874 4823 generic.go:334] "Generic (PLEG): container finished" podID="9da6c764-c7e5-4b0b-9d9f-8a5904f84187" containerID="b9b9fbca3e6400b27de82fd4143e586f7790dd1cc566af567a28744c055af69f" exitCode=0 Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.053974 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9da6c764-c7e5-4b0b-9d9f-8a5904f84187","Type":"ContainerDied","Data":"b9b9fbca3e6400b27de82fd4143e586f7790dd1cc566af567a28744c055af69f"} Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.058147 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"06a30022-1a67-4812-941e-3118f3767d35","Type":"ContainerStarted","Data":"1b49d94177b425103c3c16a280918922ccac676d6774ee3fdd8037fa6fd75eeb"} Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.058515 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.064902 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8e707833-acd6-49f7-91f7-a3ddd3a40119","Type":"ContainerStarted","Data":"39bbb96b55a53a2af30595db66c980872a4b5a00fa1926a651108a655d4f995e"} Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.106019 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.5594625989999997 podStartE2EDuration="57.105997399s" podCreationTimestamp="2025-12-06 06:43:17 +0000 UTC" firstStartedPulling="2025-12-06 06:43:18.759151095 +0000 UTC m=+1100.044903055" lastFinishedPulling="2025-12-06 06:44:13.305685895 +0000 UTC m=+1154.591437855" observedRunningTime="2025-12-06 06:44:14.09672416 +0000 UTC m=+1155.382476120" watchObservedRunningTime="2025-12-06 06:44:14.105997399 +0000 UTC m=+1155.391749359" Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.121535 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 06 06:44:14 crc kubenswrapper[4823]: I1206 06:44:14.127586 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.530299401 podStartE2EDuration="59.127569433s" podCreationTimestamp="2025-12-06 06:43:15 +0000 UTC" firstStartedPulling="2025-12-06 06:43:17.952216391 +0000 UTC m=+1099.237968351" lastFinishedPulling="2025-12-06 06:44:05.549486423 +0000 UTC m=+1146.835238383" observedRunningTime="2025-12-06 
06:44:14.118537982 +0000 UTC m=+1155.404289942" watchObservedRunningTime="2025-12-06 06:44:14.127569433 +0000 UTC m=+1155.413321393" Dec 06 06:44:15 crc kubenswrapper[4823]: I1206 06:44:15.075436 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9da6c764-c7e5-4b0b-9d9f-8a5904f84187","Type":"ContainerStarted","Data":"787c1c946064f5fe69395879b39618c0550b1b414deb76e2955a0529d188d7b8"} Dec 06 06:44:15 crc kubenswrapper[4823]: I1206 06:44:15.098506 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.651846352 podStartE2EDuration="1m1.098486184s" podCreationTimestamp="2025-12-06 06:43:14 +0000 UTC" firstStartedPulling="2025-12-06 06:43:17.378382595 +0000 UTC m=+1098.664134555" lastFinishedPulling="2025-12-06 06:44:05.825022427 +0000 UTC m=+1147.110774387" observedRunningTime="2025-12-06 06:44:15.093608283 +0000 UTC m=+1156.379360253" watchObservedRunningTime="2025-12-06 06:44:15.098486184 +0000 UTC m=+1156.384238144" Dec 06 06:44:15 crc kubenswrapper[4823]: I1206 06:44:15.513799 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 06 06:44:15 crc kubenswrapper[4823]: I1206 06:44:15.514385 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 06 06:44:16 crc kubenswrapper[4823]: I1206 06:44:16.087429 4823 generic.go:334] "Generic (PLEG): container finished" podID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerID="0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d" exitCode=0 Dec 06 06:44:16 crc kubenswrapper[4823]: I1206 06:44:16.087557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerDied","Data":"0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d"} Dec 06 06:44:16 crc kubenswrapper[4823]: I1206 06:44:16.912155 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 06 06:44:16 crc kubenswrapper[4823]: I1206 06:44:16.912486 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 06 06:44:17 crc kubenswrapper[4823]: I1206 06:44:17.102184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s2c88" event={"ID":"5dfb6e3c-4b92-4e55-9c69-679dc2326717","Type":"ContainerStarted","Data":"2851cee1e0d02fcab87303fa7092069bff9b97ca94f3ed94078e80354e5f1d91"} Dec 06 06:44:17 crc kubenswrapper[4823]: I1206 06:44:17.106634 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c7ecce4-d359-486f-9386-057202b69efd","Type":"ContainerStarted","Data":"9957dae732164f5c67fc2695ec4e15ce678f7bfdaa4f20525e0b217e88ca4f3e"} Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.120449 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94t86" event={"ID":"afe6c323-7053-4b9e-af90-27bb99d99ae3","Type":"ContainerStarted","Data":"55b18384feb0e3d819304651831c3138edfa001cb25d778b2c0d1ae64d94d42e"} Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.121319 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-94t86" Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.124156 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-notifications-server-0" event={"ID":"b6649430-bcca-4949-82d4-f15ac31f36e1","Type":"ContainerStarted","Data":"0776206f34203b847fb0e2a163495a19a5df309bb43a5225640adb9c593c762e"} Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.127754 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807fbfb1-90fe-4325-a0ac-09b309c77172","Type":"ContainerStarted","Data":"7a5bed63e275100585b394ef20b463de889ed8df1d45c00b86053347a156377a"} Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.130295 4823 generic.go:334] "Generic (PLEG): container finished" podID="5dfb6e3c-4b92-4e55-9c69-679dc2326717" containerID="2851cee1e0d02fcab87303fa7092069bff9b97ca94f3ed94078e80354e5f1d91" exitCode=0 Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.130352 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s2c88" event={"ID":"5dfb6e3c-4b92-4e55-9c69-679dc2326717","Type":"ContainerDied","Data":"2851cee1e0d02fcab87303fa7092069bff9b97ca94f3ed94078e80354e5f1d91"} Dec 06 06:44:18 crc kubenswrapper[4823]: I1206 06:44:18.144289 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-94t86" podStartSLOduration=3.42322612 podStartE2EDuration="56.144270186s" podCreationTimestamp="2025-12-06 06:43:22 +0000 UTC" firstStartedPulling="2025-12-06 06:43:24.579847851 +0000 UTC m=+1105.865599811" lastFinishedPulling="2025-12-06 06:44:17.300891917 +0000 UTC m=+1158.586643877" observedRunningTime="2025-12-06 06:44:18.1399113 +0000 UTC m=+1159.425663260" watchObservedRunningTime="2025-12-06 06:44:18.144270186 +0000 UTC m=+1159.430022146" Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.152932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s2c88" event={"ID":"5dfb6e3c-4b92-4e55-9c69-679dc2326717","Type":"ContainerStarted","Data":"13dd3921709160228b77b8090737500f88391c83a758435f2908775421186f7c"} Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.153297 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s2c88" event={"ID":"5dfb6e3c-4b92-4e55-9c69-679dc2326717","Type":"ContainerStarted","Data":"09f3744e422ca2bf3b44454d447cbc0304b72b2953c392752e256a91a2bb33d5"} Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.153625 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.181901 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-s2c88" podStartSLOduration=6.5882463829999995 podStartE2EDuration="57.181876577s" podCreationTimestamp="2025-12-06 06:43:22 +0000 UTC" firstStartedPulling="2025-12-06 06:43:25.663589437 +0000 UTC m=+1106.949341397" lastFinishedPulling="2025-12-06 06:44:16.257219631 +0000 UTC m=+1157.542971591" observedRunningTime="2025-12-06 06:44:19.174168524 +0000 UTC m=+1160.459920484" watchObservedRunningTime="2025-12-06 06:44:19.181876577 +0000 UTC m=+1160.467628537" Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.663069 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.664855 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.777062 4823 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:19 crc kubenswrapper[4823]: I1206 06:44:19.822685 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 06 06:44:20 crc kubenswrapper[4823]: I1206 06:44:20.157753 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:44:20 crc kubenswrapper[4823]: I1206 06:44:20.171647 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" Dec 06 06:44:20 crc kubenswrapper[4823]: I1206 06:44:20.231299 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86cf857789-g4nlb"] Dec 06 06:44:20 crc kubenswrapper[4823]: I1206 06:44:20.231553 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerName="dnsmasq-dns" containerID="cri-o://98b49a678f159e63ca28feb23d345fda169b002909ba7006fb0d3729d58ea5e1" gracePeriod=10 Dec 06 06:44:21 crc kubenswrapper[4823]: I1206 06:44:21.022631 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 06 06:44:21 crc kubenswrapper[4823]: I1206 06:44:21.155841 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 06 06:44:21 crc kubenswrapper[4823]: I1206 06:44:21.193094 4823 generic.go:334] "Generic (PLEG): container finished" podID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerID="98b49a678f159e63ca28feb23d345fda169b002909ba7006fb0d3729d58ea5e1" exitCode=0 Dec 06 06:44:21 crc kubenswrapper[4823]: I1206 06:44:21.193205 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" event={"ID":"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f","Type":"ContainerDied","Data":"98b49a678f159e63ca28feb23d345fda169b002909ba7006fb0d3729d58ea5e1"} Dec 06 06:44:22 crc kubenswrapper[4823]: I1206 06:44:22.669008 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.202190 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.235641 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" event={"ID":"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f","Type":"ContainerDied","Data":"10e5925d7ccd49fb2f4855ec2da0b0ed560cd2d2dc7757b92d31f8fb0d9df0c3"} Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.236086 4823 scope.go:117] "RemoveContainer" containerID="98b49a678f159e63ca28feb23d345fda169b002909ba7006fb0d3729d58ea5e1" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.236314 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86cf857789-g4nlb" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.265135 4823 scope.go:117] "RemoveContainer" containerID="bfd358bcbea0c2b6e028160aafab0f0886653a3bdffa8b606434cfeef6dd6d30" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.339252 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-config\") pod \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.339413 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfktx\" (UniqueName: \"kubernetes.io/projected/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-kube-api-access-lfktx\") pod \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.339650 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-ovsdbserver-sb\") pod \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.339712 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-dns-svc\") pod \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\" (UID: \"2ba0acb8-4751-4dca-bda8-a7a3549ddf1f\") " Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.347594 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-kube-api-access-lfktx" (OuterVolumeSpecName: "kube-api-access-lfktx") pod "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" (UID: "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f"). InnerVolumeSpecName "kube-api-access-lfktx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.393446 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" (UID: "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.397401 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" (UID: "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.402553 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-config" (OuterVolumeSpecName: "config") pod "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" (UID: "2ba0acb8-4751-4dca-bda8-a7a3549ddf1f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.442165 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfktx\" (UniqueName: \"kubernetes.io/projected/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-kube-api-access-lfktx\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.442454 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.442591 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.442705 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.579171 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86cf857789-g4nlb"] Dec 06 06:44:23 crc kubenswrapper[4823]: I1206 06:44:23.588385 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86cf857789-g4nlb"] Dec 06 06:44:24 crc kubenswrapper[4823]: I1206 06:44:24.246438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0db13557-99bd-4223-a8f1-53de273f6ba3","Type":"ContainerStarted","Data":"c26d186af1975d78ac4d0f36fd6a2d63c7b4b93c95352f49c2a22331b94e030d"} Dec 06 06:44:24 crc kubenswrapper[4823]: I1206 06:44:24.251498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerStarted","Data":"748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798"} Dec 06 06:44:24 crc kubenswrapper[4823]: I1206 06:44:24.285584 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.204785524 podStartE2EDuration="1m2.285560309s" podCreationTimestamp="2025-12-06 06:43:22 +0000 UTC" firstStartedPulling="2025-12-06 06:43:26.864566726 +0000 UTC m=+1108.150318686" lastFinishedPulling="2025-12-06 06:44:22.945341511 +0000 UTC m=+1164.231093471" observedRunningTime="2025-12-06 06:44:24.279318229 +0000 UTC m=+1165.565070189" watchObservedRunningTime="2025-12-06 06:44:24.285560309 +0000 UTC m=+1165.571312269" Dec 06 06:44:24 crc kubenswrapper[4823]: I1206 06:44:24.477481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 06 06:44:24 crc kubenswrapper[4823]: I1206 06:44:24.477615 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 06 06:44:25 crc kubenswrapper[4823]: I1206 06:44:25.180173 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" path="/var/lib/kubelet/pods/2ba0acb8-4751-4dca-bda8-a7a3549ddf1f/volumes" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.187515 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-13a0-account-create-update-rnzvp"] Dec 06 06:44:27 crc kubenswrapper[4823]: E1206 06:44:27.188474 4823 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerName="init" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.188487 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerName="init" Dec 06 06:44:27 crc kubenswrapper[4823]: E1206 06:44:27.188505 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerName="dnsmasq-dns" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.188511 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerName="dnsmasq-dns" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.188733 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba0acb8-4751-4dca-bda8-a7a3549ddf1f" containerName="dnsmasq-dns" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.189388 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.191643 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.201451 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-13a0-account-create-update-rnzvp"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.249908 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cf22j"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.251437 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.257277 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cf22j"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.285225 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerStarted","Data":"6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0"} Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.326218 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-operator-scripts\") pod \"keystone-13a0-account-create-update-rnzvp\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.326353 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9lxg\" (UniqueName: \"kubernetes.io/projected/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-kube-api-access-v9lxg\") pod \"keystone-13a0-account-create-update-rnzvp\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.428126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da3e65f-fbe0-4373-a601-8408a5f4f033-operator-scripts\") pod \"keystone-db-create-cf22j\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.428247 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9lxg\" (UniqueName: \"kubernetes.io/projected/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-kube-api-access-v9lxg\") pod \"keystone-13a0-account-create-update-rnzvp\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.428398 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2fb\" (UniqueName: \"kubernetes.io/projected/9da3e65f-fbe0-4373-a601-8408a5f4f033-kube-api-access-qh2fb\") pod \"keystone-db-create-cf22j\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.428468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-operator-scripts\") pod \"keystone-13a0-account-create-update-rnzvp\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.447350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-operator-scripts\") pod \"keystone-13a0-account-create-update-rnzvp\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.473133 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9lxg\" (UniqueName: \"kubernetes.io/projected/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-kube-api-access-v9lxg\") pod \"keystone-13a0-account-create-update-rnzvp\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.476922 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xlp49"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.478499 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.498978 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xlp49"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.530717 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.532522 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.532882 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh2fb\" (UniqueName: \"kubernetes.io/projected/9da3e65f-fbe0-4373-a601-8408a5f4f033-kube-api-access-qh2fb\") pod \"keystone-db-create-cf22j\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.533043 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da3e65f-fbe0-4373-a601-8408a5f4f033-operator-scripts\") pod \"keystone-db-create-cf22j\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.534753 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da3e65f-fbe0-4373-a601-8408a5f4f033-operator-scripts\") pod \"keystone-db-create-cf22j\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.559762 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2fb\" (UniqueName: \"kubernetes.io/projected/9da3e65f-fbe0-4373-a601-8408a5f4f033-kube-api-access-qh2fb\") pod \"keystone-db-create-cf22j\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.581645 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.605398 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3a9a-account-create-update-ckvgt"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.609796 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.613968 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.621209 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a9a-account-create-update-ckvgt"] Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.635476 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2cpd\" (UniqueName: \"kubernetes.io/projected/1333c4a8-9d88-4ba6-b00c-22b790673422-kube-api-access-t2cpd\") pod \"placement-db-create-xlp49\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.635599 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1333c4a8-9d88-4ba6-b00c-22b790673422-operator-scripts\") pod \"placement-db-create-xlp49\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.737058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e414a4-1aba-4797-a384-ed802cb06e0c-operator-scripts\") pod \"placement-3a9a-account-create-update-ckvgt\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.737190 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2cpd\" (UniqueName: \"kubernetes.io/projected/1333c4a8-9d88-4ba6-b00c-22b790673422-kube-api-access-t2cpd\") pod \"placement-db-create-xlp49\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.737571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1333c4a8-9d88-4ba6-b00c-22b790673422-operator-scripts\") pod \"placement-db-create-xlp49\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.737779 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tzk\" (UniqueName: \"kubernetes.io/projected/09e414a4-1aba-4797-a384-ed802cb06e0c-kube-api-access-z4tzk\") pod \"placement-3a9a-account-create-update-ckvgt\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.738954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1333c4a8-9d88-4ba6-b00c-22b790673422-operator-scripts\") pod \"placement-db-create-xlp49\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.764336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2cpd\" (UniqueName: 
\"kubernetes.io/projected/1333c4a8-9d88-4ba6-b00c-22b790673422-kube-api-access-t2cpd\") pod \"placement-db-create-xlp49\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.825436 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xlp49" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.839789 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4tzk\" (UniqueName: \"kubernetes.io/projected/09e414a4-1aba-4797-a384-ed802cb06e0c-kube-api-access-z4tzk\") pod \"placement-3a9a-account-create-update-ckvgt\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.839869 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e414a4-1aba-4797-a384-ed802cb06e0c-operator-scripts\") pod \"placement-3a9a-account-create-update-ckvgt\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.840862 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e414a4-1aba-4797-a384-ed802cb06e0c-operator-scripts\") pod \"placement-3a9a-account-create-update-ckvgt\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:27 crc kubenswrapper[4823]: I1206 06:44:27.863951 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tzk\" (UniqueName: \"kubernetes.io/projected/09e414a4-1aba-4797-a384-ed802cb06e0c-kube-api-access-z4tzk\") pod \"placement-3a9a-account-create-update-ckvgt\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.033503 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.064631 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-13a0-account-create-update-rnzvp"] Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.122869 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xlp49"] Dec 06 06:44:28 crc kubenswrapper[4823]: W1206 06:44:28.127268 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1333c4a8_9d88_4ba6_b00c_22b790673422.slice/crio-1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798 WatchSource:0}: Error finding container 1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798: Status 404 returned error can't find the container with id 1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798 Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.165403 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cf22j"] Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.300525 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xlp49" event={"ID":"1333c4a8-9d88-4ba6-b00c-22b790673422","Type":"ContainerStarted","Data":"1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798"} Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.304547 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cf22j" event={"ID":"9da3e65f-fbe0-4373-a601-8408a5f4f033","Type":"ContainerStarted","Data":"6f6595e919090b8f054a5d185347eb52bb74dd895609424737860220556aba4e"} Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.306527 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13a0-account-create-update-rnzvp" event={"ID":"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b","Type":"ContainerStarted","Data":"b7456e8e24167f3bdbcf9e233f23fc895fdd5494870dfc7c190ee24d72e289df"} Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.306564 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13a0-account-create-update-rnzvp" event={"ID":"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b","Type":"ContainerStarted","Data":"7c0dd10a49b128396fe9e5c350eda0f40b7d39efee1bd776465d9e3ac2002d22"} Dec 06 06:44:28 crc kubenswrapper[4823]: I1206 06:44:28.592122 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a9a-account-create-update-ckvgt"] Dec 06 06:44:28 crc kubenswrapper[4823]: W1206 06:44:28.594847 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09e414a4_1aba_4797_a384_ed802cb06e0c.slice/crio-0fe0421e847b77fc793fddf6bf475e1ba86bfc2ba8d6a7b1617693c25674e7a4 WatchSource:0}: Error finding container 0fe0421e847b77fc793fddf6bf475e1ba86bfc2ba8d6a7b1617693c25674e7a4: Status 404 returned error can't find the container with id 0fe0421e847b77fc793fddf6bf475e1ba86bfc2ba8d6a7b1617693c25674e7a4 Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.335015 4823 generic.go:334] "Generic (PLEG): container finished" podID="6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" containerID="b7456e8e24167f3bdbcf9e233f23fc895fdd5494870dfc7c190ee24d72e289df" exitCode=0 Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.335422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13a0-account-create-update-rnzvp" 
event={"ID":"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b","Type":"ContainerDied","Data":"b7456e8e24167f3bdbcf9e233f23fc895fdd5494870dfc7c190ee24d72e289df"} Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.348445 4823 generic.go:334] "Generic (PLEG): container finished" podID="1333c4a8-9d88-4ba6-b00c-22b790673422" containerID="498fdfef078b7efaf30fff9e77af1b0a1f26f1a143440a3210e34fd12e32fd89" exitCode=0 Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.348535 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xlp49" event={"ID":"1333c4a8-9d88-4ba6-b00c-22b790673422","Type":"ContainerDied","Data":"498fdfef078b7efaf30fff9e77af1b0a1f26f1a143440a3210e34fd12e32fd89"} Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.368901 4823 generic.go:334] "Generic (PLEG): container finished" podID="09e414a4-1aba-4797-a384-ed802cb06e0c" containerID="f57a11193ba25165963caae7db61751322a38eee89586673547181edfc2f461e" exitCode=0 Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.369001 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9a-account-create-update-ckvgt" event={"ID":"09e414a4-1aba-4797-a384-ed802cb06e0c","Type":"ContainerDied","Data":"f57a11193ba25165963caae7db61751322a38eee89586673547181edfc2f461e"} Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.369031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9a-account-create-update-ckvgt" event={"ID":"09e414a4-1aba-4797-a384-ed802cb06e0c","Type":"ContainerStarted","Data":"0fe0421e847b77fc793fddf6bf475e1ba86bfc2ba8d6a7b1617693c25674e7a4"} Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.376638 4823 generic.go:334] "Generic (PLEG): container finished" podID="9da3e65f-fbe0-4373-a601-8408a5f4f033" containerID="1a122802b70e127930b1a12cb9e773513a90c35d2af3abb1e23937d03777ddd5" exitCode=0 Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.376709 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cf22j" event={"ID":"9da3e65f-fbe0-4373-a601-8408a5f4f033","Type":"ContainerDied","Data":"1a122802b70e127930b1a12cb9e773513a90c35d2af3abb1e23937d03777ddd5"} Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.483308 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68b6cd6f45-wj5b8"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.495514 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.536585 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b6cd6f45-wj5b8"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.575162 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcsm\" (UniqueName: \"kubernetes.io/projected/eabecdc3-1a42-4340-9bf7-6fccb70224b3-kube-api-access-pwcsm\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.575305 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-dns-svc\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.575357 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-sb\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.575405 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-config\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.575522 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-nb\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.615969 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.677683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-nb\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.677849 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwcsm\" (UniqueName: \"kubernetes.io/projected/eabecdc3-1a42-4340-9bf7-6fccb70224b3-kube-api-access-pwcsm\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.677938 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-dns-svc\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " 
pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.677974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-sb\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.678023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-config\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.679586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-config\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.680551 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-nb\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.682329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-dns-svc\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.682976 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-sb\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.707750 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-f6wtx"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.709835 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.724609 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-f6wtx"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.735715 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwcsm\" (UniqueName: \"kubernetes.io/projected/eabecdc3-1a42-4340-9bf7-6fccb70224b3-kube-api-access-pwcsm\") pod \"dnsmasq-dns-68b6cd6f45-wj5b8\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.746116 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-3d74-account-create-update-kz9kx"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.747637 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.759183 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-3d74-account-create-update-kz9kx"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.761481 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.780119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l4s4\" (UniqueName: \"kubernetes.io/projected/ef7b8301-6cf7-4fbf-8968-e26088f8b144-kube-api-access-8l4s4\") pod \"watcher-db-create-f6wtx\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.780235 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7b8301-6cf7-4fbf-8968-e26088f8b144-operator-scripts\") pod \"watcher-db-create-f6wtx\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.848096 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.903264 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l4s4\" (UniqueName: \"kubernetes.io/projected/ef7b8301-6cf7-4fbf-8968-e26088f8b144-kube-api-access-8l4s4\") pod \"watcher-db-create-f6wtx\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.903781 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7b8301-6cf7-4fbf-8968-e26088f8b144-operator-scripts\") pod \"watcher-db-create-f6wtx\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.903913 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f3c4a2-8111-4fcc-a387-e91f191804e8-operator-scripts\") pod \"watcher-3d74-account-create-update-kz9kx\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.904231 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xkh9\" (UniqueName: \"kubernetes.io/projected/b2f3c4a2-8111-4fcc-a387-e91f191804e8-kube-api-access-9xkh9\") pod \"watcher-3d74-account-create-update-kz9kx\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.905810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7b8301-6cf7-4fbf-8968-e26088f8b144-operator-scripts\") pod \"watcher-db-create-f6wtx\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.935164 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8l4s4\" (UniqueName: \"kubernetes.io/projected/ef7b8301-6cf7-4fbf-8968-e26088f8b144-kube-api-access-8l4s4\") pod \"watcher-db-create-f6wtx\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.993268 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 06 06:44:29 crc kubenswrapper[4823]: I1206 06:44:29.995244 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.005988 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.006297 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.006495 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-gvdcc" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.007279 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.008472 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xkh9\" (UniqueName: \"kubernetes.io/projected/b2f3c4a2-8111-4fcc-a387-e91f191804e8-kube-api-access-9xkh9\") pod \"watcher-3d74-account-create-update-kz9kx\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.008612 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f3c4a2-8111-4fcc-a387-e91f191804e8-operator-scripts\") pod \"watcher-3d74-account-create-update-kz9kx\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.009816 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f3c4a2-8111-4fcc-a387-e91f191804e8-operator-scripts\") pod \"watcher-3d74-account-create-update-kz9kx\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.018847 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.040809 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xkh9\" (UniqueName: \"kubernetes.io/projected/b2f3c4a2-8111-4fcc-a387-e91f191804e8-kube-api-access-9xkh9\") pod \"watcher-3d74-account-create-update-kz9kx\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37bce594-0e2b-42f2-affd-892bd457c1b2-scripts\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110237 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37bce594-0e2b-42f2-affd-892bd457c1b2-config\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110400 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/37bce594-0e2b-42f2-affd-892bd457c1b2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110429 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110455 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.110480 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thnnq\" (UniqueName: \"kubernetes.io/projected/37bce594-0e2b-42f2-affd-892bd457c1b2-kube-api-access-thnnq\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.111568 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.125866 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213405 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/37bce594-0e2b-42f2-affd-892bd457c1b2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213474 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213552 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thnnq\" (UniqueName: \"kubernetes.io/projected/37bce594-0e2b-42f2-affd-892bd457c1b2-kube-api-access-thnnq\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37bce594-0e2b-42f2-affd-892bd457c1b2-scripts\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37bce594-0e2b-42f2-affd-892bd457c1b2-config\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.213901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.217026 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/37bce594-0e2b-42f2-affd-892bd457c1b2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.217012 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37bce594-0e2b-42f2-affd-892bd457c1b2-scripts\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.217629 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37bce594-0e2b-42f2-affd-892bd457c1b2-config\") pod \"ovn-northd-0\" (UID: 
\"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.221866 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.225213 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.226507 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/37bce594-0e2b-42f2-affd-892bd457c1b2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.246090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thnnq\" (UniqueName: \"kubernetes.io/projected/37bce594-0e2b-42f2-affd-892bd457c1b2-kube-api-access-thnnq\") pod \"ovn-northd-0\" (UID: \"37bce594-0e2b-42f2-affd-892bd457c1b2\") " pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.324853 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b6cd6f45-wj5b8"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.342204 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 06:44:30 crc kubenswrapper[4823]: W1206 06:44:30.347746 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeabecdc3_1a42_4340_9bf7_6fccb70224b3.slice/crio-542e734bf9035713231aa65d9dc1e598ad6acb2db32b6ea8a56d2b6b65976cd4 WatchSource:0}: Error finding container 542e734bf9035713231aa65d9dc1e598ad6acb2db32b6ea8a56d2b6b65976cd4: Status 404 returned error can't find the container with id 542e734bf9035713231aa65d9dc1e598ad6acb2db32b6ea8a56d2b6b65976cd4 Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.424455 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" event={"ID":"eabecdc3-1a42-4340-9bf7-6fccb70224b3","Type":"ContainerStarted","Data":"542e734bf9035713231aa65d9dc1e598ad6acb2db32b6ea8a56d2b6b65976cd4"} Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.606939 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-3d74-account-create-update-kz9kx"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.617686 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.644040 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.644255 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.650803 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.651468 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b765j" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.651731 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.651944 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.723778 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-f6wtx"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.724593 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.724738 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-lock\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.724784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.724811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444fh\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-kube-api-access-444fh\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.724853 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-cache\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: W1206 06:44:30.735912 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7b8301_6cf7_4fbf_8968_e26088f8b144.slice/crio-a0659cd778d2c5e4d1904188a241718352bded704e02f211f8711742ed5fe7f5 WatchSource:0}: Error finding container a0659cd778d2c5e4d1904188a241718352bded704e02f211f8711742ed5fe7f5: Status 404 returned error can't find the container with id a0659cd778d2c5e4d1904188a241718352bded704e02f211f8711742ed5fe7f5 Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.827048 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-lock\") pod \"swift-storage-0\" (UID: 
\"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.827104 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.827129 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444fh\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-kube-api-access-444fh\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.827160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-cache\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.827245 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: E1206 06:44:30.827407 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 06:44:30 crc kubenswrapper[4823]: E1206 06:44:30.827421 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 06:44:30 crc kubenswrapper[4823]: E1206 06:44:30.827463 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift podName:df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5 nodeName:}" failed. No retries permitted until 2025-12-06 06:44:31.327447539 +0000 UTC m=+1172.613199499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift") pod "swift-storage-0" (UID: "df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5") : configmap "swift-ring-files" not found Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.828293 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.828557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-cache\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.828978 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-lock\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.875458 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444fh\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-kube-api-access-444fh\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.922239 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.933536 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-skpkp"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.936833 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.944609 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.944930 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.945092 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.957472 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-skpkp"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.986805 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rl26v"] Dec 06 06:44:30 crc kubenswrapper[4823]: I1206 06:44:30.988540 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.002046 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-skpkp"] Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.012572 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rl26v"] Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035081 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-combined-ca-bundle\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035153 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-dispersionconf\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035218 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-scripts\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035248 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-etc-swift\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhm85\" (UniqueName: \"kubernetes.io/projected/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-kube-api-access-lhm85\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035357 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-swiftconf\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.035495 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-ring-data-devices\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: E1206 06:44:31.061339 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-lhm85 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/swift-ring-rebalance-skpkp" podUID="ba0513fb-93cc-4081-a178-0bbf3ec35f6c" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138291 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-dispersionconf\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138364 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-scripts\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138404 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-swiftconf\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-ring-data-devices\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138525 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-ring-data-devices\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138585 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-combined-ca-bundle\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138610 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpj7m\" (UniqueName: \"kubernetes.io/projected/a539f115-9bb8-4282-9f99-c198920d4bb9-kube-api-access-jpj7m\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-dispersionconf\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138713 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a539f115-9bb8-4282-9f99-c198920d4bb9-etc-swift\") pod \"swift-ring-rebalance-rl26v\" (UID: 
\"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-scripts\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138792 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-etc-swift\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138829 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-combined-ca-bundle\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138883 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-swiftconf\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.138924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhm85\" (UniqueName: \"kubernetes.io/projected/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-kube-api-access-lhm85\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.143063 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-ring-data-devices\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.143734 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-etc-swift\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.149181 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-scripts\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.155181 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-dispersionconf\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc 
kubenswrapper[4823]: I1206 06:44:31.158602 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-combined-ca-bundle\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.165312 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.165337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-swiftconf\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.166499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhm85\" (UniqueName: \"kubernetes.io/projected/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-kube-api-access-lhm85\") pod \"swift-ring-rebalance-skpkp\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.241083 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9lxg\" (UniqueName: \"kubernetes.io/projected/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-kube-api-access-v9lxg\") pod \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.241328 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-operator-scripts\") pod \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\" (UID: \"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.241680 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpj7m\" (UniqueName: \"kubernetes.io/projected/a539f115-9bb8-4282-9f99-c198920d4bb9-kube-api-access-jpj7m\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.241802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a539f115-9bb8-4282-9f99-c198920d4bb9-etc-swift\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.241876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-combined-ca-bundle\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.241939 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-swiftconf\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " 
pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.242037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-dispersionconf\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.242071 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-scripts\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.242429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-ring-data-devices\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.243454 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-ring-data-devices\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.246433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a539f115-9bb8-4282-9f99-c198920d4bb9-etc-swift\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.247263 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" (UID: "6b2687bc-8979-48a0-8a02-8e6cd5f62b0b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.247783 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-scripts\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.255617 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-combined-ca-bundle\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.257799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-dispersionconf\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.261065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-kube-api-access-v9lxg" (OuterVolumeSpecName: "kube-api-access-v9lxg") pod "6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" (UID: "6b2687bc-8979-48a0-8a02-8e6cd5f62b0b"). InnerVolumeSpecName "kube-api-access-v9lxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.263655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-swiftconf\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.275501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpj7m\" (UniqueName: \"kubernetes.io/projected/a539f115-9bb8-4282-9f99-c198920d4bb9-kube-api-access-jpj7m\") pod \"swift-ring-rebalance-rl26v\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.343903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.344269 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9lxg\" (UniqueName: \"kubernetes.io/projected/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-kube-api-access-v9lxg\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.344287 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: E1206 06:44:31.344095 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 06:44:31 crc kubenswrapper[4823]: E1206 06:44:31.344311 4823 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 06:44:31 crc kubenswrapper[4823]: E1206 06:44:31.344363 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift podName:df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5 nodeName:}" failed. No retries permitted until 2025-12-06 06:44:32.344344078 +0000 UTC m=+1173.630096038 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift") pod "swift-storage-0" (UID: "df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5") : configmap "swift-ring-files" not found Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.346813 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.444896 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.454428 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13a0-account-create-update-rnzvp" event={"ID":"6b2687bc-8979-48a0-8a02-8e6cd5f62b0b","Type":"ContainerDied","Data":"7c0dd10a49b128396fe9e5c350eda0f40b7d39efee1bd776465d9e3ac2002d22"} Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.454487 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c0dd10a49b128396fe9e5c350eda0f40b7d39efee1bd776465d9e3ac2002d22" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.454441 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13a0-account-create-update-rnzvp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.487988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-f6wtx" event={"ID":"ef7b8301-6cf7-4fbf-8968-e26088f8b144","Type":"ContainerStarted","Data":"b2b15653f86c5efa5017573c825e3bbff3e0b2078dd04d58c0ff800a5de4171d"} Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.488051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-f6wtx" event={"ID":"ef7b8301-6cf7-4fbf-8968-e26088f8b144","Type":"ContainerStarted","Data":"a0659cd778d2c5e4d1904188a241718352bded704e02f211f8711742ed5fe7f5"} Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.501996 4823 generic.go:334] "Generic (PLEG): container finished" podID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerID="a8419210f1976208c4cdb29e6cc83e1addcb0067fa10753e6272c971775a3be0" exitCode=0 Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.502096 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" event={"ID":"eabecdc3-1a42-4340-9bf7-6fccb70224b3","Type":"ContainerDied","Data":"a8419210f1976208c4cdb29e6cc83e1addcb0067fa10753e6272c971775a3be0"} Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.515140 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.516948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-3d74-account-create-update-kz9kx" event={"ID":"b2f3c4a2-8111-4fcc-a387-e91f191804e8","Type":"ContainerStarted","Data":"47f5076437a15a6d4897a3eb70180aab41c23429617ddc77a4bfe855ec3009db"} Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.516996 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-3d74-account-create-update-kz9kx" event={"ID":"b2f3c4a2-8111-4fcc-a387-e91f191804e8","Type":"ContainerStarted","Data":"d8791b90142fd906fc27383209d4bdfcd0cef8181c1fb2b3af527fb3cc7edb92"} Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.532644 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-f6wtx" podStartSLOduration=2.532616757 podStartE2EDuration="2.532616757s" podCreationTimestamp="2025-12-06 06:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:31.512627549 +0000 UTC m=+1172.798379509" watchObservedRunningTime="2025-12-06 06:44:31.532616757 +0000 UTC m=+1172.818368727" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.537178 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.580089 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-3d74-account-create-update-kz9kx" podStartSLOduration=2.580061841 podStartE2EDuration="2.580061841s" podCreationTimestamp="2025-12-06 06:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:31.573031187 +0000 UTC m=+1172.858783147" watchObservedRunningTime="2025-12-06 06:44:31.580061841 +0000 UTC m=+1172.865813801" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.651936 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-ring-data-devices\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652068 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-scripts\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652130 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-dispersionconf\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652165 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-etc-swift\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652264 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lhm85\" (UniqueName: \"kubernetes.io/projected/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-kube-api-access-lhm85\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-swiftconf\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652347 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-combined-ca-bundle\") pod \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\" (UID: \"ba0513fb-93cc-4081-a178-0bbf3ec35f6c\") " Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652616 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.652845 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.653284 4823 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.653312 4823 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.653909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-scripts" (OuterVolumeSpecName: "scripts") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.658339 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-kube-api-access-lhm85" (OuterVolumeSpecName: "kube-api-access-lhm85") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "kube-api-access-lhm85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.658387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.660842 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.660977 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ba0513fb-93cc-4081-a178-0bbf3ec35f6c" (UID: "ba0513fb-93cc-4081-a178-0bbf3ec35f6c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.755072 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.755108 4823 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.755121 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhm85\" (UniqueName: \"kubernetes.io/projected/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-kube-api-access-lhm85\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.755130 4823 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:31 crc kubenswrapper[4823]: I1206 06:44:31.755139 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0513fb-93cc-4081-a178-0bbf3ec35f6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.367063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:32 crc kubenswrapper[4823]: E1206 06:44:32.367342 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 06:44:32 crc kubenswrapper[4823]: E1206 06:44:32.367395 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 06:44:32 crc kubenswrapper[4823]: E1206 06:44:32.367503 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift podName:df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5 nodeName:}" failed. No retries permitted until 2025-12-06 06:44:34.36747814 +0000 UTC m=+1175.653230100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift") pod "swift-storage-0" (UID: "df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5") : configmap "swift-ring-files" not found Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.532867 4823 generic.go:334] "Generic (PLEG): container finished" podID="b2f3c4a2-8111-4fcc-a387-e91f191804e8" containerID="47f5076437a15a6d4897a3eb70180aab41c23429617ddc77a4bfe855ec3009db" exitCode=0 Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.532956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-3d74-account-create-update-kz9kx" event={"ID":"b2f3c4a2-8111-4fcc-a387-e91f191804e8","Type":"ContainerDied","Data":"47f5076437a15a6d4897a3eb70180aab41c23429617ddc77a4bfe855ec3009db"} Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.541299 4823 generic.go:334] "Generic (PLEG): container finished" podID="ef7b8301-6cf7-4fbf-8968-e26088f8b144" containerID="b2b15653f86c5efa5017573c825e3bbff3e0b2078dd04d58c0ff800a5de4171d" exitCode=0 Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.541425 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-skpkp" Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.541417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-f6wtx" event={"ID":"ef7b8301-6cf7-4fbf-8968-e26088f8b144","Type":"ContainerDied","Data":"b2b15653f86c5efa5017573c825e3bbff3e0b2078dd04d58c0ff800a5de4171d"} Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.609349 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-skpkp"] Dec 06 06:44:32 crc kubenswrapper[4823]: I1206 06:44:32.618135 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-skpkp"] Dec 06 06:44:33 crc kubenswrapper[4823]: I1206 06:44:33.152310 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0513fb-93cc-4081-a178-0bbf3ec35f6c" path="/var/lib/kubelet/pods/ba0513fb-93cc-4081-a178-0bbf3ec35f6c/volumes" Dec 06 06:44:34 crc kubenswrapper[4823]: I1206 06:44:34.405281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:34 crc kubenswrapper[4823]: E1206 06:44:34.405610 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 06:44:34 crc kubenswrapper[4823]: E1206 06:44:34.405648 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 06:44:34 crc kubenswrapper[4823]: E1206 06:44:34.405729 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift podName:df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5 nodeName:}" failed. No retries permitted until 2025-12-06 06:44:38.405707102 +0000 UTC m=+1179.691459062 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift") pod "swift-storage-0" (UID: "df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5") : configmap "swift-ring-files" not found Dec 06 06:44:36 crc kubenswrapper[4823]: I1206 06:44:36.052205 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:44:36 crc kubenswrapper[4823]: I1206 06:44:36.052485 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:44:38 crc kubenswrapper[4823]: I1206 06:44:38.414180 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:38 crc kubenswrapper[4823]: E1206 06:44:38.414509 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 06:44:38 crc kubenswrapper[4823]: E1206 06:44:38.414652 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 06:44:38 crc kubenswrapper[4823]: E1206 06:44:38.414732 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift podName:df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5 nodeName:}" failed. No retries permitted until 2025-12-06 06:44:46.414713202 +0000 UTC m=+1187.700465162 (durationBeforeRetry 8s). 
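
From the first etc-swift failure at 06:44:30 to this one, the durationBeforeRetry values double each time: 500ms, 1s, 2s, 4s, 8s. A minimal sketch of that doubling schedule; the starting value is taken straight from the log, while the cap is an assumption about where the kubelet bounds volume-retry backoff:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // First durationBeforeRetry observed in this log.
        d := 500 * time.Millisecond
        // Assumed upper bound; the kubelet caps volume-operation backoff
        // somewhere around the two-minute mark.
        maxRetry := 2*time.Minute + 2*time.Second
        for i := 0; i < 9; i++ {
            fmt.Println(d) // 500ms 1s 2s 4s 8s ... matching the retries above
            d *= 2
            if d > maxRetry {
                d = maxRetry
            }
        }
    }
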
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift") pod "swift-storage-0" (UID: "df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5") : configmap "swift-ring-files" not found Dec 06 06:44:38 crc kubenswrapper[4823]: I1206 06:44:38.618885 4823 patch_prober.go:28] interesting pod/router-default-5444994796-4rlt6 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:44:38 crc kubenswrapper[4823]: I1206 06:44:38.618983 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-4rlt6" podUID="6434f84a-f26e-4bf8-8ea9-cbc987ad0b1e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:44:43 crc kubenswrapper[4823]: W1206 06:44:43.482439 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37bce594_0e2b_42f2_affd_892bd457c1b2.slice/crio-48f449a1b39b58359f4777da412e8c8216d5bab1ad5d107d57f67f74873a70b7 WatchSource:0}: Error finding container 48f449a1b39b58359f4777da412e8c8216d5bab1ad5d107d57f67f74873a70b7: Status 404 returned error can't find the container with id 48f449a1b39b58359f4777da412e8c8216d5bab1ad5d107d57f67f74873a70b7 Dec 06 06:44:43 crc kubenswrapper[4823]: E1206 06:44:43.503895 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4" Dec 06 06:44:43 crc kubenswrapper[4823]: E1206 06:44:43.504076 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
Dec 06 06:44:43 crc kubenswrapper[4823]: E1206 06:44:43.504076 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phjxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(d40b985e-9817-453a-8a4b-72d7eadf4683): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 06:44:43 crc kubenswrapper[4823]: E1206 06:44:43.505276 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683"
Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.672312 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-f6wtx" event={"ID":"ef7b8301-6cf7-4fbf-8968-e26088f8b144","Type":"ContainerDied","Data":"a0659cd778d2c5e4d1904188a241718352bded704e02f211f8711742ed5fe7f5"}
Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.672700 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0659cd778d2c5e4d1904188a241718352bded704e02f211f8711742ed5fe7f5"
Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.677179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-3d74-account-create-update-kz9kx" event={"ID":"b2f3c4a2-8111-4fcc-a387-e91f191804e8","Type":"ContainerDied","Data":"d8791b90142fd906fc27383209d4bdfcd0cef8181c1fb2b3af527fb3cc7edb92"}
Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.677221 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8791b90142fd906fc27383209d4bdfcd0cef8181c1fb2b3af527fb3cc7edb92"
Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.684503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xlp49" event={"ID":"1333c4a8-9d88-4ba6-b00c-22b790673422","Type":"ContainerDied","Data":"1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798"}
Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.684557 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798"
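The thanos-sidecar failure above is a pull cancelled mid-copy ("context canceled"), which surfaces as ErrImagePull and then, in the entries below, as ImagePullBackOff while the kubelet waits before re-pulling. The surrounding ContainerDied/"Container not found" pairs are ordinary teardown of completed db-create jobs, not additional failures. A hedged client-go sketch for inspecting the pull events for the affected pod from outside the node; the pod name and namespace come from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Field selector narrows the list to the pod the kubelet is complaining about.
	evs, err := cs.CoreV1().Events("openstack").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=prometheus-metric-storage-0",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		// Expect Pulling / Failed / BackOff entries mirroring the kubelet log.
		fmt.Printf("%s\t%s\t%s\n", e.LastTimestamp, e.Reason, e.Message)
	}
}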
containers" containerID="1ff4de22b0c44bdd313e3b62c978ef55802bbff9620aef577fc0f53e9eaed798" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.695296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9a-account-create-update-ckvgt" event={"ID":"09e414a4-1aba-4797-a384-ed802cb06e0c","Type":"ContainerDied","Data":"0fe0421e847b77fc793fddf6bf475e1ba86bfc2ba8d6a7b1617693c25674e7a4"} Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.695381 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe0421e847b77fc793fddf6bf475e1ba86bfc2ba8d6a7b1617693c25674e7a4" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.696869 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"37bce594-0e2b-42f2-affd-892bd457c1b2","Type":"ContainerStarted","Data":"48f449a1b39b58359f4777da412e8c8216d5bab1ad5d107d57f67f74873a70b7"} Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.698780 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cf22j" event={"ID":"9da3e65f-fbe0-4373-a601-8408a5f4f033","Type":"ContainerDied","Data":"6f6595e919090b8f054a5d185347eb52bb74dd895609424737860220556aba4e"} Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.698833 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f6595e919090b8f054a5d185347eb52bb74dd895609424737860220556aba4e" Dec 06 06:44:43 crc kubenswrapper[4823]: E1206 06:44:43.701068 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.729641 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.753783 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.766224 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xlp49" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.777633 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.792818 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.815405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7b8301-6cf7-4fbf-8968-e26088f8b144-operator-scripts\") pod \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.815725 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l4s4\" (UniqueName: \"kubernetes.io/projected/ef7b8301-6cf7-4fbf-8968-e26088f8b144-kube-api-access-8l4s4\") pod \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\" (UID: \"ef7b8301-6cf7-4fbf-8968-e26088f8b144\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.816841 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7b8301-6cf7-4fbf-8968-e26088f8b144-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef7b8301-6cf7-4fbf-8968-e26088f8b144" (UID: "ef7b8301-6cf7-4fbf-8968-e26088f8b144"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.823317 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7b8301-6cf7-4fbf-8968-e26088f8b144-kube-api-access-8l4s4" (OuterVolumeSpecName: "kube-api-access-8l4s4") pod "ef7b8301-6cf7-4fbf-8968-e26088f8b144" (UID: "ef7b8301-6cf7-4fbf-8968-e26088f8b144"). InnerVolumeSpecName "kube-api-access-8l4s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.917651 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4tzk\" (UniqueName: \"kubernetes.io/projected/09e414a4-1aba-4797-a384-ed802cb06e0c-kube-api-access-z4tzk\") pod \"09e414a4-1aba-4797-a384-ed802cb06e0c\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918077 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1333c4a8-9d88-4ba6-b00c-22b790673422-operator-scripts\") pod \"1333c4a8-9d88-4ba6-b00c-22b790673422\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918122 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e414a4-1aba-4797-a384-ed802cb06e0c-operator-scripts\") pod \"09e414a4-1aba-4797-a384-ed802cb06e0c\" (UID: \"09e414a4-1aba-4797-a384-ed802cb06e0c\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918227 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da3e65f-fbe0-4373-a601-8408a5f4f033-operator-scripts\") pod \"9da3e65f-fbe0-4373-a601-8408a5f4f033\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918268 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2cpd\" (UniqueName: \"kubernetes.io/projected/1333c4a8-9d88-4ba6-b00c-22b790673422-kube-api-access-t2cpd\") pod \"1333c4a8-9d88-4ba6-b00c-22b790673422\" (UID: \"1333c4a8-9d88-4ba6-b00c-22b790673422\") " Dec 06 06:44:43 crc 
kubenswrapper[4823]: I1206 06:44:43.918326 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xkh9\" (UniqueName: \"kubernetes.io/projected/b2f3c4a2-8111-4fcc-a387-e91f191804e8-kube-api-access-9xkh9\") pod \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918432 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f3c4a2-8111-4fcc-a387-e91f191804e8-operator-scripts\") pod \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\" (UID: \"b2f3c4a2-8111-4fcc-a387-e91f191804e8\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918452 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh2fb\" (UniqueName: \"kubernetes.io/projected/9da3e65f-fbe0-4373-a601-8408a5f4f033-kube-api-access-qh2fb\") pod \"9da3e65f-fbe0-4373-a601-8408a5f4f033\" (UID: \"9da3e65f-fbe0-4373-a601-8408a5f4f033\") " Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1333c4a8-9d88-4ba6-b00c-22b790673422-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1333c4a8-9d88-4ba6-b00c-22b790673422" (UID: "1333c4a8-9d88-4ba6-b00c-22b790673422"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.918816 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e414a4-1aba-4797-a384-ed802cb06e0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09e414a4-1aba-4797-a384-ed802cb06e0c" (UID: "09e414a4-1aba-4797-a384-ed802cb06e0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.919063 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l4s4\" (UniqueName: \"kubernetes.io/projected/ef7b8301-6cf7-4fbf-8968-e26088f8b144-kube-api-access-8l4s4\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.919088 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef7b8301-6cf7-4fbf-8968-e26088f8b144-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.919103 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1333c4a8-9d88-4ba6-b00c-22b790673422-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.919115 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e414a4-1aba-4797-a384-ed802cb06e0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.919599 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2f3c4a2-8111-4fcc-a387-e91f191804e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2f3c4a2-8111-4fcc-a387-e91f191804e8" (UID: "b2f3c4a2-8111-4fcc-a387-e91f191804e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.919618 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9da3e65f-fbe0-4373-a601-8408a5f4f033-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9da3e65f-fbe0-4373-a601-8408a5f4f033" (UID: "9da3e65f-fbe0-4373-a601-8408a5f4f033"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.920992 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e414a4-1aba-4797-a384-ed802cb06e0c-kube-api-access-z4tzk" (OuterVolumeSpecName: "kube-api-access-z4tzk") pod "09e414a4-1aba-4797-a384-ed802cb06e0c" (UID: "09e414a4-1aba-4797-a384-ed802cb06e0c"). InnerVolumeSpecName "kube-api-access-z4tzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.921968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1333c4a8-9d88-4ba6-b00c-22b790673422-kube-api-access-t2cpd" (OuterVolumeSpecName: "kube-api-access-t2cpd") pod "1333c4a8-9d88-4ba6-b00c-22b790673422" (UID: "1333c4a8-9d88-4ba6-b00c-22b790673422"). InnerVolumeSpecName "kube-api-access-t2cpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.922478 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f3c4a2-8111-4fcc-a387-e91f191804e8-kube-api-access-9xkh9" (OuterVolumeSpecName: "kube-api-access-9xkh9") pod "b2f3c4a2-8111-4fcc-a387-e91f191804e8" (UID: "b2f3c4a2-8111-4fcc-a387-e91f191804e8"). InnerVolumeSpecName "kube-api-access-9xkh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:43 crc kubenswrapper[4823]: I1206 06:44:43.923143 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da3e65f-fbe0-4373-a601-8408a5f4f033-kube-api-access-qh2fb" (OuterVolumeSpecName: "kube-api-access-qh2fb") pod "9da3e65f-fbe0-4373-a601-8408a5f4f033" (UID: "9da3e65f-fbe0-4373-a601-8408a5f4f033"). InnerVolumeSpecName "kube-api-access-qh2fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.007418 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rl26v"] Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.020748 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da3e65f-fbe0-4373-a601-8408a5f4f033-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.020828 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2cpd\" (UniqueName: \"kubernetes.io/projected/1333c4a8-9d88-4ba6-b00c-22b790673422-kube-api-access-t2cpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.020842 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xkh9\" (UniqueName: \"kubernetes.io/projected/b2f3c4a2-8111-4fcc-a387-e91f191804e8-kube-api-access-9xkh9\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.020852 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f3c4a2-8111-4fcc-a387-e91f191804e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.020864 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh2fb\" (UniqueName: \"kubernetes.io/projected/9da3e65f-fbe0-4373-a601-8408a5f4f033-kube-api-access-qh2fb\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.020875 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4tzk\" (UniqueName: \"kubernetes.io/projected/09e414a4-1aba-4797-a384-ed802cb06e0c-kube-api-access-z4tzk\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:44 crc kubenswrapper[4823]: W1206 06:44:44.208456 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda539f115_9bb8_4282_9f99_c198920d4bb9.slice/crio-33ba27976bf1b5e21b0d0544f00fc527474c68a0db524cdff937fe74655481e5 WatchSource:0}: Error finding container 33ba27976bf1b5e21b0d0544f00fc527474c68a0db524cdff937fe74655481e5: Status 404 returned error can't find the container with id 33ba27976bf1b5e21b0d0544f00fc527474c68a0db524cdff937fe74655481e5 Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.713885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"37bce594-0e2b-42f2-affd-892bd457c1b2","Type":"ContainerStarted","Data":"24f6996911a16b330b81cf85d1a6ede9ccc1d05141bde3c98bc6125282d3ebb8"} Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.719115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" event={"ID":"eabecdc3-1a42-4340-9bf7-6fccb70224b3","Type":"ContainerStarted","Data":"5a97807bff5e3069d6d2d75d09f91f2fbe7f8be3db4517bf690aba8a4f10974a"} Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.719248 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.721044 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-f6wtx" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.721067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rl26v" event={"ID":"a539f115-9bb8-4282-9f99-c198920d4bb9","Type":"ContainerStarted","Data":"33ba27976bf1b5e21b0d0544f00fc527474c68a0db524cdff937fe74655481e5"} Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.721100 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-3d74-account-create-update-kz9kx" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.721151 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xlp49" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.721175 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cf22j" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.721598 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9a-account-create-update-ckvgt" Dec 06 06:44:44 crc kubenswrapper[4823]: I1206 06:44:44.751095 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" podStartSLOduration=15.751074643 podStartE2EDuration="15.751074643s" podCreationTimestamp="2025-12-06 06:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:44.740408024 +0000 UTC m=+1186.026159984" watchObservedRunningTime="2025-12-06 06:44:44.751074643 +0000 UTC m=+1186.036826593" Dec 06 06:44:45 crc kubenswrapper[4823]: I1206 06:44:45.731847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"37bce594-0e2b-42f2-affd-892bd457c1b2","Type":"ContainerStarted","Data":"a0fbbf74da45b7ebb3e954eddb23db94d7b3a7856a52491a04549859c63c2421"} Dec 06 06:44:45 crc kubenswrapper[4823]: I1206 06:44:45.732426 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 06 06:44:45 crc kubenswrapper[4823]: I1206 06:44:45.765048 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=16.004717662 podStartE2EDuration="16.765024328s" podCreationTimestamp="2025-12-06 06:44:29 +0000 UTC" firstStartedPulling="2025-12-06 06:44:43.49575204 +0000 UTC m=+1184.781504000" lastFinishedPulling="2025-12-06 06:44:44.256058706 +0000 UTC m=+1185.541810666" observedRunningTime="2025-12-06 06:44:45.749276663 +0000 UTC m=+1187.035028623" watchObservedRunningTime="2025-12-06 06:44:45.765024328 +0000 UTC m=+1187.050776288" Dec 06 06:44:46 crc kubenswrapper[4823]: I1206 06:44:46.198522 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 06 06:44:46 crc kubenswrapper[4823]: E1206 06:44:46.203908 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" Dec 06 06:44:46 crc kubenswrapper[4823]: I1206 06:44:46.499341 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:44:46 crc kubenswrapper[4823]: E1206 06:44:46.499523 4823 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 06:44:46 crc kubenswrapper[4823]: E1206 06:44:46.499539 4823 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 06:44:46 crc kubenswrapper[4823]: E1206 06:44:46.499597 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift podName:df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5 nodeName:}" failed. No retries permitted until 2025-12-06 06:45:02.499575768 +0000 UTC m=+1203.785327728 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift") pod "swift-storage-0" (UID: "df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5") : configmap "swift-ring-files" not found Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.219463 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-94t86" podUID="afe6c323-7053-4b9e-af90-27bb99d99ae3" containerName="ovn-controller" probeResult="failure" output=< Dec 06 06:44:48 crc kubenswrapper[4823]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 06 06:44:48 crc kubenswrapper[4823]: > Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.241935 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.242522 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s2c88" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.466098 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-94t86-config-48b7l"] Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.466874 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7b8301-6cf7-4fbf-8968-e26088f8b144" containerName="mariadb-database-create" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.466898 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7b8301-6cf7-4fbf-8968-e26088f8b144" containerName="mariadb-database-create" Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.466917 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da3e65f-fbe0-4373-a601-8408a5f4f033" containerName="mariadb-database-create" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.466928 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da3e65f-fbe0-4373-a601-8408a5f4f033" containerName="mariadb-database-create" Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.466947 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" containerName="mariadb-account-create-update" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.466957 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" containerName="mariadb-account-create-update" Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.466974 4823 
Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.466974 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f3c4a2-8111-4fcc-a387-e91f191804e8" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.466982 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f3c4a2-8111-4fcc-a387-e91f191804e8" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.466993 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e414a4-1aba-4797-a384-ed802cb06e0c" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467001 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e414a4-1aba-4797-a384-ed802cb06e0c" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: E1206 06:44:48.467030 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1333c4a8-9d88-4ba6-b00c-22b790673422" containerName="mariadb-database-create"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467039 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1333c4a8-9d88-4ba6-b00c-22b790673422" containerName="mariadb-database-create"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467259 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467288 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1333c4a8-9d88-4ba6-b00c-22b790673422" containerName="mariadb-database-create"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467303 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e414a4-1aba-4797-a384-ed802cb06e0c" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467317 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f3c4a2-8111-4fcc-a387-e91f191804e8" containerName="mariadb-account-create-update"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467334 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da3e65f-fbe0-4373-a601-8408a5f4f033" containerName="mariadb-database-create"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.467347 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7b8301-6cf7-4fbf-8968-e26088f8b144" containerName="mariadb-database-create"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.468165 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.471411 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.477583 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94t86-config-48b7l"]
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.639947 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-additional-scripts\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.640020 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-log-ovn\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.640049 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7vm\" (UniqueName: \"kubernetes.io/projected/6b5a30ac-6042-439e-b5ec-fcd9302b475f-kube-api-access-cs7vm\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.640098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.640284 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-scripts\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.640340 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run-ovn\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742140 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742243 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-scripts\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run-ovn\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742364 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-additional-scripts\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742416 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-log-ovn\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs7vm\" (UniqueName: \"kubernetes.io/projected/6b5a30ac-6042-439e-b5ec-fcd9302b475f-kube-api-access-cs7vm\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742626 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-log-ovn\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.742623 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run-ovn\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.743155 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-additional-scripts\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.745007 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-scripts\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l"
\"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.766088 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs7vm\" (UniqueName: \"kubernetes.io/projected/6b5a30ac-6042-439e-b5ec-fcd9302b475f-kube-api-access-cs7vm\") pod \"ovn-controller-94t86-config-48b7l\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") " pod="openstack/ovn-controller-94t86-config-48b7l" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.780438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rl26v" event={"ID":"a539f115-9bb8-4282-9f99-c198920d4bb9","Type":"ContainerStarted","Data":"7db686db2e30fc413a298f05fd6d014419d8f66d95a52863ba4a68d95459d5b3"} Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.796411 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94t86-config-48b7l" Dec 06 06:44:48 crc kubenswrapper[4823]: I1206 06:44:48.816037 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rl26v" podStartSLOduration=14.752846275 podStartE2EDuration="18.816019893s" podCreationTimestamp="2025-12-06 06:44:30 +0000 UTC" firstStartedPulling="2025-12-06 06:44:44.211515967 +0000 UTC m=+1185.497267927" lastFinishedPulling="2025-12-06 06:44:48.274689585 +0000 UTC m=+1189.560441545" observedRunningTime="2025-12-06 06:44:48.814140788 +0000 UTC m=+1190.099892758" watchObservedRunningTime="2025-12-06 06:44:48.816019893 +0000 UTC m=+1190.101771853" Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.298320 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-94t86-config-48b7l"] Dec 06 06:44:49 crc kubenswrapper[4823]: W1206 06:44:49.300807 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b5a30ac_6042_439e_b5ec_fcd9302b475f.slice/crio-625d4d2c003e050d2e4d35d911a0cf93d5833d6fe4cf3f0acb85d707cb1be34c WatchSource:0}: Error finding container 625d4d2c003e050d2e4d35d911a0cf93d5833d6fe4cf3f0acb85d707cb1be34c: Status 404 returned error can't find the container with id 625d4d2c003e050d2e4d35d911a0cf93d5833d6fe4cf3f0acb85d707cb1be34c Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.790476 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94t86-config-48b7l" event={"ID":"6b5a30ac-6042-439e-b5ec-fcd9302b475f","Type":"ContainerStarted","Data":"625d4d2c003e050d2e4d35d911a0cf93d5833d6fe4cf3f0acb85d707cb1be34c"} Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.792486 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c7ecce4-d359-486f-9386-057202b69efd" containerID="9957dae732164f5c67fc2695ec4e15ce678f7bfdaa4f20525e0b217e88ca4f3e" exitCode=0 Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.792728 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c7ecce4-d359-486f-9386-057202b69efd","Type":"ContainerDied","Data":"9957dae732164f5c67fc2695ec4e15ce678f7bfdaa4f20525e0b217e88ca4f3e"} Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.850845 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.919412 4823 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-75f8fbf54c-4622b"] Dec 06 06:44:49 crc kubenswrapper[4823]: I1206 06:44:49.919987 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="dnsmasq-dns" containerID="cri-o://cd8342201160926ef9133129ae60929ca3ea4c6d7d31d321b0e90ae04a45418d" gracePeriod=10 Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.169864 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.806554 4823 generic.go:334] "Generic (PLEG): container finished" podID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerID="cd8342201160926ef9133129ae60929ca3ea4c6d7d31d321b0e90ae04a45418d" exitCode=0 Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.806695 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" event={"ID":"7d9a34b1-6dcf-4c85-8518-210aa4c49591","Type":"ContainerDied","Data":"cd8342201160926ef9133129ae60929ca3ea4c6d7d31d321b0e90ae04a45418d"} Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.810623 4823 generic.go:334] "Generic (PLEG): container finished" podID="6b5a30ac-6042-439e-b5ec-fcd9302b475f" containerID="438ae53c9cc8a28fdb62cfcc2dc2f5b24d39e6e6cb1b54d73d8c68fd8eb67b06" exitCode=0 Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.811217 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94t86-config-48b7l" event={"ID":"6b5a30ac-6042-439e-b5ec-fcd9302b475f","Type":"ContainerDied","Data":"438ae53c9cc8a28fdb62cfcc2dc2f5b24d39e6e6cb1b54d73d8c68fd8eb67b06"} Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.813869 4823 generic.go:334] "Generic (PLEG): container finished" podID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerID="7a5bed63e275100585b394ef20b463de889ed8df1d45c00b86053347a156377a" exitCode=0 Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.813922 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807fbfb1-90fe-4325-a0ac-09b309c77172","Type":"ContainerDied","Data":"7a5bed63e275100585b394ef20b463de889ed8df1d45c00b86053347a156377a"} Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.816066 4823 generic.go:334] "Generic (PLEG): container finished" podID="b6649430-bcca-4949-82d4-f15ac31f36e1" containerID="0776206f34203b847fb0e2a163495a19a5df309bb43a5225640adb9c593c762e" exitCode=0 Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.816155 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"b6649430-bcca-4949-82d4-f15ac31f36e1","Type":"ContainerDied","Data":"0776206f34203b847fb0e2a163495a19a5df309bb43a5225640adb9c593c762e"} Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.819136 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c7ecce4-d359-486f-9386-057202b69efd","Type":"ContainerStarted","Data":"9a13cf91bdfbbbe373c5175e3d6d15d45934ed383cd43480359bc8464e82914c"} Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.819710 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.969243 
Dec 06 06:44:50 crc kubenswrapper[4823]: I1206 06:44:50.969243 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.674224266 podStartE2EDuration="1m39.969221401s" podCreationTimestamp="2025-12-06 06:43:11 +0000 UTC" firstStartedPulling="2025-12-06 06:43:13.919836467 +0000 UTC m=+1095.205588427" lastFinishedPulling="2025-12-06 06:44:15.214833602 +0000 UTC m=+1156.500585562" observedRunningTime="2025-12-06 06:44:50.900461551 +0000 UTC m=+1192.186213531" watchObservedRunningTime="2025-12-06 06:44:50.969221401 +0000 UTC m=+1192.254973361"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.059216 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.198931 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:44:51 crc kubenswrapper[4823]: E1206 06:44:51.203242 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.204543 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.212708 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-nb\") pod \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") "
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.212817 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-config\") pod \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") "
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.212892 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85jgq\" (UniqueName: \"kubernetes.io/projected/7d9a34b1-6dcf-4c85-8518-210aa4c49591-kube-api-access-85jgq\") pod \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") "
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.212976 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-sb\") pod \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") "
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.213041 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-dns-svc\") pod \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\" (UID: \"7d9a34b1-6dcf-4c85-8518-210aa4c49591\") "
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.221876 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9a34b1-6dcf-4c85-8518-210aa4c49591-kube-api-access-85jgq" (OuterVolumeSpecName: "kube-api-access-85jgq") pod "7d9a34b1-6dcf-4c85-8518-210aa4c49591" (UID: "7d9a34b1-6dcf-4c85-8518-210aa4c49591"). InnerVolumeSpecName "kube-api-access-85jgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.268940 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-config" (OuterVolumeSpecName: "config") pod "7d9a34b1-6dcf-4c85-8518-210aa4c49591" (UID: "7d9a34b1-6dcf-4c85-8518-210aa4c49591"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.281869 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d9a34b1-6dcf-4c85-8518-210aa4c49591" (UID: "7d9a34b1-6dcf-4c85-8518-210aa4c49591"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.296678 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d9a34b1-6dcf-4c85-8518-210aa4c49591" (UID: "7d9a34b1-6dcf-4c85-8518-210aa4c49591"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.304306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d9a34b1-6dcf-4c85-8518-210aa4c49591" (UID: "7d9a34b1-6dcf-4c85-8518-210aa4c49591"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.314985 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.315021 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.315033 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.315049 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9a34b1-6dcf-4c85-8518-210aa4c49591-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.315062 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85jgq\" (UniqueName: \"kubernetes.io/projected/7d9a34b1-6dcf-4c85-8518-210aa4c49591-kube-api-access-85jgq\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.831418 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"b6649430-bcca-4949-82d4-f15ac31f36e1","Type":"ContainerStarted","Data":"f7efdf192cf3ef0bd3c62f76eed171e2605778c8cc18f4b5254ec83b442a2682"}
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.832380 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.833857 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b" event={"ID":"7d9a34b1-6dcf-4c85-8518-210aa4c49591","Type":"ContainerDied","Data":"90843e9e4f8394e26da7c1089262dcd0c489d9ea8fad713fbaa3ed7bb1ae2cc5"}
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.833888 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f8fbf54c-4622b"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.833988 4823 scope.go:117] "RemoveContainer" containerID="cd8342201160926ef9133129ae60929ca3ea4c6d7d31d321b0e90ae04a45418d"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.836843 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807fbfb1-90fe-4325-a0ac-09b309c77172","Type":"ContainerStarted","Data":"2aa6f450971c1e8a9d97d5edad42e9ba460acafa208bade36d92d0f33d45a43d"}
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.837334 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 06:44:51 crc kubenswrapper[4823]: E1206 06:44:51.838783 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.839431 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.858166 4823 scope.go:117] "RemoveContainer" containerID="68c978611404224898a6b1716ddaa7df0ff38880ef50e3a9df027dcb17308436"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.867188 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=-9223371936.987614 podStartE2EDuration="1m39.86716299s" podCreationTimestamp="2025-12-06 06:43:12 +0000 UTC" firstStartedPulling="2025-12-06 06:43:15.416490657 +0000 UTC m=+1096.702242617" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:51.862591318 +0000 UTC m=+1193.148343298" watchObservedRunningTime="2025-12-06 06:44:51.86716299 +0000 UTC m=+1193.152914950"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.943757 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371935.911045 podStartE2EDuration="1m40.943730296s" podCreationTimestamp="2025-12-06 06:43:11 +0000 UTC" firstStartedPulling="2025-12-06 06:43:14.384877389 +0000 UTC m=+1095.670629349" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:44:51.93624236 +0000 UTC m=+1193.221994350" watchObservedRunningTime="2025-12-06 06:44:51.943730296 +0000 UTC m=+1193.229482256"
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.963773 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f8fbf54c-4622b"]
Dec 06 06:44:51 crc kubenswrapper[4823]: I1206 06:44:51.971924 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75f8fbf54c-4622b"]
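The rabbitmq podStartSLOduration values around -9223371936s above are not real measurements: both entries carry a zero-value lastFinishedPulling (0001-01-01), and a Go time.Duration cannot represent a gap of ~2025 years, so the computation saturates near math.MinInt64 nanoseconds. The reported value sits the pod's genuine e2e duration (~99.87s) above the exact minimum, consistent with a small real term being added to the saturated one. A small demonstration of the saturation, assuming only Go's documented Time.Sub clamping:

package main

import (
	"fmt"
	"time"
)

func main() {
	var zero time.Time // 0001-01-01 00:00:00 UTC, like lastFinishedPulling above
	d := zero.Sub(time.Now())
	// Time.Sub clamps to the minimum Duration on overflow.
	fmt.Println(d)           // -2562047h47m16.854775808s (saturated minimum)
	fmt.Println(d.Seconds()) // ≈ -9.223372036854776e+09, matching the log's magnitude
}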
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.249720 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.334781 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run\") pod \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") "
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.334850 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-scripts\") pod \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") "
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335026 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs7vm\" (UniqueName: \"kubernetes.io/projected/6b5a30ac-6042-439e-b5ec-fcd9302b475f-kube-api-access-cs7vm\") pod \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") "
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335053 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run" (OuterVolumeSpecName: "var-run") pod "6b5a30ac-6042-439e-b5ec-fcd9302b475f" (UID: "6b5a30ac-6042-439e-b5ec-fcd9302b475f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335058 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run-ovn\") pod \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") "
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335100 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-additional-scripts\") pod \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") "
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335150 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-log-ovn\") pod \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\" (UID: \"6b5a30ac-6042-439e-b5ec-fcd9302b475f\") "
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335216 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6b5a30ac-6042-439e-b5ec-fcd9302b475f" (UID: "6b5a30ac-6042-439e-b5ec-fcd9302b475f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335350 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6b5a30ac-6042-439e-b5ec-fcd9302b475f" (UID: "6b5a30ac-6042-439e-b5ec-fcd9302b475f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.335781 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6b5a30ac-6042-439e-b5ec-fcd9302b475f" (UID: "6b5a30ac-6042-439e-b5ec-fcd9302b475f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.336165 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-scripts" (OuterVolumeSpecName: "scripts") pod "6b5a30ac-6042-439e-b5ec-fcd9302b475f" (UID: "6b5a30ac-6042-439e-b5ec-fcd9302b475f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.336568 4823 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.336592 4823 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.336603 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.336615 4823 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b5a30ac-6042-439e-b5ec-fcd9302b475f-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.336627 4823 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5a30ac-6042-439e-b5ec-fcd9302b475f-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.340335 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b5a30ac-6042-439e-b5ec-fcd9302b475f-kube-api-access-cs7vm" (OuterVolumeSpecName: "kube-api-access-cs7vm") pod "6b5a30ac-6042-439e-b5ec-fcd9302b475f" (UID: "6b5a30ac-6042-439e-b5ec-fcd9302b475f"). InnerVolumeSpecName "kube-api-access-cs7vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.437956 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs7vm\" (UniqueName: \"kubernetes.io/projected/6b5a30ac-6042-439e-b5ec-fcd9302b475f-kube-api-access-cs7vm\") on node \"crc\" DevicePath \"\""
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.849111 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-94t86-config-48b7l"
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.849367 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-94t86-config-48b7l" event={"ID":"6b5a30ac-6042-439e-b5ec-fcd9302b475f","Type":"ContainerDied","Data":"625d4d2c003e050d2e4d35d911a0cf93d5833d6fe4cf3f0acb85d707cb1be34c"}
Dec 06 06:44:52 crc kubenswrapper[4823]: I1206 06:44:52.850912 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625d4d2c003e050d2e4d35d911a0cf93d5833d6fe4cf3f0acb85d707cb1be34c"
Dec 06 06:44:52 crc kubenswrapper[4823]: E1206 06:44:52.851752 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683"
Dec 06 06:44:53 crc kubenswrapper[4823]: I1206 06:44:53.152333 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" path="/var/lib/kubelet/pods/7d9a34b1-6dcf-4c85-8518-210aa4c49591/volumes"
Dec 06 06:44:53 crc kubenswrapper[4823]: I1206 06:44:53.217337 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-94t86"
Dec 06 06:44:53 crc kubenswrapper[4823]: I1206 06:44:53.375065 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-94t86-config-48b7l"]
Dec 06 06:44:53 crc kubenswrapper[4823]: I1206 06:44:53.384684 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-94t86-config-48b7l"]
Dec 06 06:44:55 crc kubenswrapper[4823]: I1206 06:44:55.154179 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b5a30ac-6042-439e-b5ec-fcd9302b475f" path="/var/lib/kubelet/pods/6b5a30ac-6042-439e-b5ec-fcd9302b475f/volumes"
Dec 06 06:44:55 crc kubenswrapper[4823]: I1206 06:44:55.411238 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Dec 06 06:44:57 crc kubenswrapper[4823]: I1206 06:44:57.890507 4823 generic.go:334] "Generic (PLEG): container finished" podID="a539f115-9bb8-4282-9f99-c198920d4bb9" containerID="7db686db2e30fc413a298f05fd6d014419d8f66d95a52863ba4a68d95459d5b3" exitCode=0
Dec 06 06:44:57 crc kubenswrapper[4823]: I1206 06:44:57.890583 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rl26v" event={"ID":"a539f115-9bb8-4282-9f99-c198920d4bb9","Type":"ContainerDied","Data":"7db686db2e30fc413a298f05fd6d014419d8f66d95a52863ba4a68d95459d5b3"}
Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.272960 4823 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.460767 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-combined-ca-bundle\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.460818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-swiftconf\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.460844 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpj7m\" (UniqueName: \"kubernetes.io/projected/a539f115-9bb8-4282-9f99-c198920d4bb9-kube-api-access-jpj7m\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.460884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a539f115-9bb8-4282-9f99-c198920d4bb9-etc-swift\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.460919 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-dispersionconf\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.460993 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-ring-data-devices\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.461119 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-scripts\") pod \"a539f115-9bb8-4282-9f99-c198920d4bb9\" (UID: \"a539f115-9bb8-4282-9f99-c198920d4bb9\") " Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.462124 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a539f115-9bb8-4282-9f99-c198920d4bb9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.463181 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.466948 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a539f115-9bb8-4282-9f99-c198920d4bb9-kube-api-access-jpj7m" (OuterVolumeSpecName: "kube-api-access-jpj7m") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "kube-api-access-jpj7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.469988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.485407 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-scripts" (OuterVolumeSpecName: "scripts") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.487174 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.489030 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a539f115-9bb8-4282-9f99-c198920d4bb9" (UID: "a539f115-9bb8-4282-9f99-c198920d4bb9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562865 4823 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562900 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a539f115-9bb8-4282-9f99-c198920d4bb9-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562909 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562918 4823 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562926 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpj7m\" (UniqueName: \"kubernetes.io/projected/a539f115-9bb8-4282-9f99-c198920d4bb9-kube-api-access-jpj7m\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562938 4823 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a539f115-9bb8-4282-9f99-c198920d4bb9-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.562945 4823 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a539f115-9bb8-4282-9f99-c198920d4bb9-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.933028 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rl26v" event={"ID":"a539f115-9bb8-4282-9f99-c198920d4bb9","Type":"ContainerDied","Data":"33ba27976bf1b5e21b0d0544f00fc527474c68a0db524cdff937fe74655481e5"} Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.933106 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ba27976bf1b5e21b0d0544f00fc527474c68a0db524cdff937fe74655481e5" Dec 06 06:44:59 crc kubenswrapper[4823]: I1206 06:44:59.933144 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rl26v" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.153797 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j"] Dec 06 06:45:00 crc kubenswrapper[4823]: E1206 06:45:00.154235 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a539f115-9bb8-4282-9f99-c198920d4bb9" containerName="swift-ring-rebalance" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154249 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a539f115-9bb8-4282-9f99-c198920d4bb9" containerName="swift-ring-rebalance" Dec 06 06:45:00 crc kubenswrapper[4823]: E1206 06:45:00.154272 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="dnsmasq-dns" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154279 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="dnsmasq-dns" Dec 06 06:45:00 crc kubenswrapper[4823]: E1206 06:45:00.154290 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b5a30ac-6042-439e-b5ec-fcd9302b475f" containerName="ovn-config" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154297 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b5a30ac-6042-439e-b5ec-fcd9302b475f" containerName="ovn-config" Dec 06 06:45:00 crc kubenswrapper[4823]: E1206 06:45:00.154320 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="init" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154343 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="init" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154520 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9a34b1-6dcf-4c85-8518-210aa4c49591" containerName="dnsmasq-dns" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154533 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a539f115-9bb8-4282-9f99-c198920d4bb9" containerName="swift-ring-rebalance" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.154547 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b5a30ac-6042-439e-b5ec-fcd9302b475f" containerName="ovn-config" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.155286 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.160575 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.170561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.170580 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j"] Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.274258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52ebc36f-9460-42e9-a2e3-e6be86efaacb-secret-volume\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.274350 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ebc36f-9460-42e9-a2e3-e6be86efaacb-config-volume\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.275780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26km5\" (UniqueName: \"kubernetes.io/projected/52ebc36f-9460-42e9-a2e3-e6be86efaacb-kube-api-access-26km5\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.377431 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52ebc36f-9460-42e9-a2e3-e6be86efaacb-secret-volume\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.377801 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ebc36f-9460-42e9-a2e3-e6be86efaacb-config-volume\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.377957 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26km5\" (UniqueName: \"kubernetes.io/projected/52ebc36f-9460-42e9-a2e3-e6be86efaacb-kube-api-access-26km5\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.379327 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ebc36f-9460-42e9-a2e3-e6be86efaacb-config-volume\") pod 
\"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.385193 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52ebc36f-9460-42e9-a2e3-e6be86efaacb-secret-volume\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.398329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26km5\" (UniqueName: \"kubernetes.io/projected/52ebc36f-9460-42e9-a2e3-e6be86efaacb-kube-api-access-26km5\") pod \"collect-profiles-29416725-xs28j\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.479814 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:00 crc kubenswrapper[4823]: I1206 06:45:00.976347 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j"] Dec 06 06:45:01 crc kubenswrapper[4823]: I1206 06:45:01.955017 4823 generic.go:334] "Generic (PLEG): container finished" podID="52ebc36f-9460-42e9-a2e3-e6be86efaacb" containerID="8de692e587afe423c20918f3b539a4fcbab0a68c25985d26dadfb57ac291c8ff" exitCode=0 Dec 06 06:45:01 crc kubenswrapper[4823]: I1206 06:45:01.955158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" event={"ID":"52ebc36f-9460-42e9-a2e3-e6be86efaacb","Type":"ContainerDied","Data":"8de692e587afe423c20918f3b539a4fcbab0a68c25985d26dadfb57ac291c8ff"} Dec 06 06:45:01 crc kubenswrapper[4823]: I1206 06:45:01.955647 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" event={"ID":"52ebc36f-9460-42e9-a2e3-e6be86efaacb","Type":"ContainerStarted","Data":"57e4350da0fe27dd308ed3a3b577d72266b764556cee96cf8d763d562b986f99"} Dec 06 06:45:02 crc kubenswrapper[4823]: I1206 06:45:02.516588 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:45:02 crc kubenswrapper[4823]: I1206 06:45:02.524098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5-etc-swift\") pod \"swift-storage-0\" (UID: \"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5\") " pod="openstack/swift-storage-0" Dec 06 06:45:02 crc kubenswrapper[4823]: I1206 06:45:02.779151 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 06 06:45:03 crc kubenswrapper[4823]: I1206 06:45:03.041851 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.233591 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.377479 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.426389 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 06 06:45:04 crc kubenswrapper[4823]: W1206 06:45:03.446470 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf5b8da6_1e4a_4d07_a1af_3a4ab2aa2ce5.slice/crio-efd67658571a8bb37ce15856942743dbef5b3cf12b042a54c24a5fc71e7b1565 WatchSource:0}: Error finding container efd67658571a8bb37ce15856942743dbef5b3cf12b042a54c24a5fc71e7b1565: Status 404 returned error can't find the container with id efd67658571a8bb37ce15856942743dbef5b3cf12b042a54c24a5fc71e7b1565 Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.534047 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ebc36f-9460-42e9-a2e3-e6be86efaacb-config-volume\") pod \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.534196 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26km5\" (UniqueName: \"kubernetes.io/projected/52ebc36f-9460-42e9-a2e3-e6be86efaacb-kube-api-access-26km5\") pod \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.534391 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52ebc36f-9460-42e9-a2e3-e6be86efaacb-secret-volume\") pod \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\" (UID: \"52ebc36f-9460-42e9-a2e3-e6be86efaacb\") " Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.535236 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ebc36f-9460-42e9-a2e3-e6be86efaacb-config-volume" (OuterVolumeSpecName: "config-volume") pod "52ebc36f-9460-42e9-a2e3-e6be86efaacb" (UID: "52ebc36f-9460-42e9-a2e3-e6be86efaacb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.543063 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ebc36f-9460-42e9-a2e3-e6be86efaacb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52ebc36f-9460-42e9-a2e3-e6be86efaacb" (UID: "52ebc36f-9460-42e9-a2e3-e6be86efaacb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.544206 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ebc36f-9460-42e9-a2e3-e6be86efaacb-kube-api-access-26km5" (OuterVolumeSpecName: "kube-api-access-26km5") pod "52ebc36f-9460-42e9-a2e3-e6be86efaacb" (UID: "52ebc36f-9460-42e9-a2e3-e6be86efaacb"). InnerVolumeSpecName "kube-api-access-26km5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.636739 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52ebc36f-9460-42e9-a2e3-e6be86efaacb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.636778 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ebc36f-9460-42e9-a2e3-e6be86efaacb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.636789 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26km5\" (UniqueName: \"kubernetes.io/projected/52ebc36f-9460-42e9-a2e3-e6be86efaacb-kube-api-access-26km5\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.989631 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"efd67658571a8bb37ce15856942743dbef5b3cf12b042a54c24a5fc71e7b1565"} Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.991093 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" event={"ID":"52ebc36f-9460-42e9-a2e3-e6be86efaacb","Type":"ContainerDied","Data":"57e4350da0fe27dd308ed3a3b577d72266b764556cee96cf8d763d562b986f99"} Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.991116 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57e4350da0fe27dd308ed3a3b577d72266b764556cee96cf8d763d562b986f99" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:03.991189 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j" Dec 06 06:45:04 crc kubenswrapper[4823]: I1206 06:45:04.465734 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="b6649430-bcca-4949-82d4-f15ac31f36e1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Dec 06 06:45:05 crc kubenswrapper[4823]: I1206 06:45:05.001201 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"ff2bcc4f3b428211c7707980e9f0cf94d2c600bea50dff9313f187209d983177"} Dec 06 06:45:05 crc kubenswrapper[4823]: I1206 06:45:05.001248 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"40be02f9761aed2dc37a1d7acd5ccf2697061e4977283962d42551ecddd5d6be"} Dec 06 06:45:06 crc kubenswrapper[4823]: I1206 06:45:06.014825 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"353a38aa7e37a3342bc9acc54a72f2d49f5c46db17ebf1b3ad24408e484cfa39"} Dec 06 06:45:06 crc kubenswrapper[4823]: I1206 06:45:06.015159 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"eb022009deb7de016891fb6fc8bf53a7587205956ae82fc1da7c38b42e0363a4"} Dec 06 06:45:06 crc kubenswrapper[4823]: I1206 06:45:06.052417 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:45:06 crc kubenswrapper[4823]: I1206 06:45:06.052492 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:45:08 crc kubenswrapper[4823]: I1206 06:45:08.035818 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerStarted","Data":"6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b"} Dec 06 06:45:08 crc kubenswrapper[4823]: I1206 06:45:08.044110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"d2535ccbca0b9a68ff0fc2bef38a639b97c90d0dc468daacb927d78472b1f8b5"} Dec 06 06:45:08 crc kubenswrapper[4823]: I1206 06:45:08.044146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"3b51a3b4aa548ddfe9f29f376f9c8ca4b7f47a761dc12d4e3fd2f94bf6149d04"} Dec 06 06:45:08 crc kubenswrapper[4823]: I1206 06:45:08.044156 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"6dd27515ea3b19748c56f166b20b35795da8eac2bcbd5da17299a8f52015d0f1"} Dec 06 06:45:08 crc kubenswrapper[4823]: I1206 06:45:08.044164 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"a67f03c9cb2b7e6fe5b7f3c1adefeca6c52ae99508aa1312844fa9c3d1e3842b"} Dec 06 06:45:08 crc kubenswrapper[4823]: I1206 06:45:08.070913 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.448148237 podStartE2EDuration="1m49.070892305s" podCreationTimestamp="2025-12-06 06:43:19 +0000 UTC" firstStartedPulling="2025-12-06 06:43:22.779450293 +0000 UTC m=+1104.065202253" lastFinishedPulling="2025-12-06 06:45:07.402194361 +0000 UTC m=+1208.687946321" observedRunningTime="2025-12-06 06:45:08.069508335 +0000 UTC m=+1209.355260325" watchObservedRunningTime="2025-12-06 06:45:08.070892305 +0000 UTC m=+1209.356644265" Dec 06 06:45:09 crc kubenswrapper[4823]: I1206 06:45:09.058419 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"7e9f5adfbc3a01ebbb73883342c00b9ccaa7cc93e8ff5d0fedfba0de06f69902"} Dec 06 06:45:09 crc kubenswrapper[4823]: I1206 06:45:09.058981 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"cf8f1e0f7ff561459bf4e2a93e2792b6db063d4aa4f58913a762c4ab8c8c615d"} Dec 06 06:45:10 crc kubenswrapper[4823]: I1206 06:45:10.071456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"d73419e93115d104b83244c8c811dfcb9667913865c26287b310fc19e9f2eb94"} Dec 06 06:45:10 crc kubenswrapper[4823]: I1206 06:45:10.667303 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:45:10 crc kubenswrapper[4823]: I1206 06:45:10.668760 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="prometheus" containerID="cri-o://748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798" gracePeriod=600 Dec 06 06:45:10 crc kubenswrapper[4823]: I1206 06:45:10.669302 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="thanos-sidecar" containerID="cri-o://6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b" gracePeriod=600 Dec 06 06:45:10 crc kubenswrapper[4823]: I1206 06:45:10.669371 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="config-reloader" containerID="cri-o://6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0" gracePeriod=600 Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.083812 4823 generic.go:334] "Generic (PLEG): container finished" podID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerID="6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b" exitCode=0 Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.083850 
4823 generic.go:334] "Generic (PLEG): container finished" podID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerID="748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798" exitCode=0 Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.083890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerDied","Data":"6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b"} Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.083946 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerDied","Data":"748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798"} Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.090141 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"1f211e77ed3766d1aa1f079f431d803b12c62d817767cfb10c54c15c2eede9f3"} Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.090195 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"716a1094c62a32c9ec93cc8d7d3e4800ede2a1cd7b3eac2da86324761275a872"} Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.198909 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.700147 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.717775 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d40b985e-9817-453a-8a4b-72d7eadf4683-config-out\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.717843 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-tls-assets\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.717880 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phjxz\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-kube-api-access-phjxz\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.717933 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d40b985e-9817-453a-8a4b-72d7eadf4683-prometheus-metric-storage-rulefiles-0\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.717975 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-thanos-prometheus-http-client-file\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.718109 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.718148 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-config\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.718217 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-web-config\") pod \"d40b985e-9817-453a-8a4b-72d7eadf4683\" (UID: \"d40b985e-9817-453a-8a4b-72d7eadf4683\") " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.719193 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40b985e-9817-453a-8a4b-72d7eadf4683-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.728254 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-config" (OuterVolumeSpecName: "config") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.728287 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-kube-api-access-phjxz" (OuterVolumeSpecName: "kube-api-access-phjxz") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "kube-api-access-phjxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.728383 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40b985e-9817-453a-8a4b-72d7eadf4683-config-out" (OuterVolumeSpecName: "config-out") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.728658 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.740142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.762170 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.767806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-web-config" (OuterVolumeSpecName: "web-config") pod "d40b985e-9817-453a-8a4b-72d7eadf4683" (UID: "d40b985e-9817-453a-8a4b-72d7eadf4683"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.822717 4823 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.822783 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") on node \"crc\" " Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.822799 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.822812 4823 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d40b985e-9817-453a-8a4b-72d7eadf4683-web-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.822825 4823 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d40b985e-9817-453a-8a4b-72d7eadf4683-config-out\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.823031 4823 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.823043 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phjxz\" (UniqueName: \"kubernetes.io/projected/d40b985e-9817-453a-8a4b-72d7eadf4683-kube-api-access-phjxz\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.823054 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d40b985e-9817-453a-8a4b-72d7eadf4683-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.849744 4823 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.850440 4823 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8") on node "crc" Dec 06 06:45:11 crc kubenswrapper[4823]: I1206 06:45:11.924556 4823 reconciler_common.go:293] "Volume detached for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.100302 4823 generic.go:334] "Generic (PLEG): container finished" podID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerID="6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0" exitCode=0 Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.100389 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.100398 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerDied","Data":"6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0"} Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.101550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d40b985e-9817-453a-8a4b-72d7eadf4683","Type":"ContainerDied","Data":"4414f2038df659871e550ed821f8f39684a11c499553efafb3c3fe67f4c32eb7"} Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.101584 4823 scope.go:117] "RemoveContainer" containerID="6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.108777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"9dc97e8a5fa6bb3436785e373c6174fc5cbe4278675cb29f67cf3baf90a38e37"} Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.109007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5","Type":"ContainerStarted","Data":"aca1b2c72a33a2725f686d28639f07e3bcf8446ddfaff0188f54f41e1e72b99c"} Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.128906 4823 scope.go:117] "RemoveContainer" containerID="6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.148128 4823 scope.go:117] "RemoveContainer" containerID="748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.161429 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.983330678 podStartE2EDuration="43.161406835s" podCreationTimestamp="2025-12-06 06:44:29 +0000 UTC" firstStartedPulling="2025-12-06 06:45:03.449083028 +0000 UTC m=+1204.734834988" lastFinishedPulling="2025-12-06 06:45:08.627159185 +0000 UTC m=+1209.912911145" observedRunningTime="2025-12-06 06:45:12.145996599 +0000 UTC m=+1213.431748569" watchObservedRunningTime="2025-12-06 06:45:12.161406835 +0000 UTC m=+1213.447158795" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.185846 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.186740 4823 scope.go:117] "RemoveContainer" containerID="0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.200022 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229026 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229219 4823 scope.go:117] "RemoveContainer" containerID="6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b" Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.229597 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="init-config-reloader" Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 
06:45:12.229627 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="init-config-reloader"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.229651 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ebc36f-9460-42e9-a2e3-e6be86efaacb" containerName="collect-profiles"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229680 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ebc36f-9460-42e9-a2e3-e6be86efaacb" containerName="collect-profiles"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.229707 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="prometheus"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229716 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="prometheus"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.229730 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="thanos-sidecar"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229737 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="thanos-sidecar"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.229748 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="config-reloader"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229754 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="config-reloader"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229955 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="thanos-sidecar"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229977 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ebc36f-9460-42e9-a2e3-e6be86efaacb" containerName="collect-profiles"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.229969 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b\": container with ID starting with 6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b not found: ID does not exist" containerID="6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.230012 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b"} err="failed to get container status \"6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b\": rpc error: code = NotFound desc = could not find container \"6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b\": container with ID starting with 6dbe018210f7e483f54f2d96c0d3677d204cd6cd433761c23a94289f98b2b04b not found: ID does not exist"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.230054 4823 scope.go:117] "RemoveContainer" containerID="6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.229991 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="config-reloader"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.230169 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" containerName="prometheus"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.232065 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.235973 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0\": container with ID starting with 6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0 not found: ID does not exist" containerID="6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.236019 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0"} err="failed to get container status \"6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0\": rpc error: code = NotFound desc = could not find container \"6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0\": container with ID starting with 6fc3fc1bd5e402f68cccf2da3cce4281bf07a01590034a05387c118a061ebfb0 not found: ID does not exist"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.236053 4823 scope.go:117] "RemoveContainer" containerID="748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.236109 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.236205 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.236301 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.236354 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.237007 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.237101 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ts4xm"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.237683 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.238368 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798\": container with ID starting with 748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798 not found: ID does not exist" containerID="748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.238408 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798"} err="failed to get container status \"748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798\": rpc error: code = NotFound desc = could not find container \"748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798\": container with ID starting with 748e10f51e3bc3cbaaff2f08ca46a535d8d3bc3c5a5936d68015e5fac01bb798 not found: ID does not exist"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.238434 4823 scope.go:117] "RemoveContainer" containerID="0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d"
Dec 06 06:45:12 crc kubenswrapper[4823]: E1206 06:45:12.239897 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d\": container with ID starting with 0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d not found: ID does not exist" containerID="0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.239928 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d"} err="failed to get container status \"0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d\": rpc error: code = NotFound desc = could not find container \"0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d\": container with ID starting with 0c4937a472ff3438ca571a336857409074a41ef08374ed5c4915823254918c6d not found: ID does not exist"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.260353 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333379 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333431 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-config\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333472 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333538 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333561 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333685 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333716 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d605a044-9bcd-4e5f-a44f-71cf32706e46-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333876 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d605a044-9bcd-4e5f-a44f-71cf32706e46-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.333909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.334266 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6t4\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-kube-api-access-6n6t4\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.435848 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n6t4\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-kube-api-access-6n6t4\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.435929 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.435958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-config\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.435990 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436033 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436099 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d605a044-9bcd-4e5f-a44f-71cf32706e46-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436138 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d605a044-9bcd-4e5f-a44f-71cf32706e46-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.436209 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.437333 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d605a044-9bcd-4e5f-a44f-71cf32706e46-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.441437 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.441494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.441606 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-config\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.442019 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.442180 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.442292 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.442322 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fab8261b70d6f995dab453a667c3bae61bb90c651f0d61d1c06bd0698dff1b77/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.442499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.443382 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.448799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d605a044-9bcd-4e5f-a44f-71cf32706e46-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.458200 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6t4\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-kube-api-access-6n6t4\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.481859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.506415 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"]
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.507937 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.510275 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.538975 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-nb\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.539349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-config\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.539456 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-sb\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.539570 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-swift-storage-0\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.539767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdsf\" (UniqueName: \"kubernetes.io/projected/9fc050cb-5e23-4a27-85f6-d95f40e2e237-kube-api-access-zkdsf\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.539897 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-svc\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.541358 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"]
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.579823 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.640971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-nb\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.641030 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-config\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.641053 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-sb\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.641076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-swift-storage-0\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.641122 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkdsf\" (UniqueName: \"kubernetes.io/projected/9fc050cb-5e23-4a27-85f6-d95f40e2e237-kube-api-access-zkdsf\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.641145 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-svc\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.642333 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-nb\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.642407 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-sb\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.643105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-config\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.643184 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-swift-storage-0\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.643345 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-svc\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.663550 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkdsf\" (UniqueName: \"kubernetes.io/projected/9fc050cb-5e23-4a27-85f6-d95f40e2e237-kube-api-access-zkdsf\") pod \"dnsmasq-dns-5d97c8ddfc-xcbs2\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:12 crc kubenswrapper[4823]: I1206 06:45:12.860515 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.041602 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.163028 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40b985e-9817-453a-8a4b-72d7eadf4683" path="/var/lib/kubelet/pods/d40b985e-9817-453a-8a4b-72d7eadf4683/volumes"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.182096 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.232224 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.399716 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.457965 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6x8x9"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.459381 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.467544 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6x8x9"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.553532 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hq4mm"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.554845 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.572570 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9h6k\" (UniqueName: \"kubernetes.io/projected/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-kube-api-access-w9h6k\") pod \"cinder-db-create-6x8x9\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.572976 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-operator-scripts\") pod \"cinder-db-create-6x8x9\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.574435 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hq4mm"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.586031 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7eb5-account-create-update-pdxbm"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.587476 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.590152 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.601548 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7eb5-account-create-update-pdxbm"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.663856 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-205f-account-create-update-fk4rx"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.665228 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.671809 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.674826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fca812df-30ed-47ad-9a3f-5fbb17d7032d-operator-scripts\") pod \"barbican-db-create-hq4mm\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.674908 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a360db-ef02-428f-85fc-2470c362c39e-operator-scripts\") pod \"barbican-7eb5-account-create-update-pdxbm\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.674954 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kgz\" (UniqueName: \"kubernetes.io/projected/fca812df-30ed-47ad-9a3f-5fbb17d7032d-kube-api-access-g6kgz\") pod \"barbican-db-create-hq4mm\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.675038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-operator-scripts\") pod \"cinder-db-create-6x8x9\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.675105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9h6k\" (UniqueName: \"kubernetes.io/projected/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-kube-api-access-w9h6k\") pod \"cinder-db-create-6x8x9\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.675180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpsj\" (UniqueName: \"kubernetes.io/projected/e9a360db-ef02-428f-85fc-2470c362c39e-kube-api-access-7tpsj\") pod \"barbican-7eb5-account-create-update-pdxbm\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.675905 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-operator-scripts\") pod \"cinder-db-create-6x8x9\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.685787 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-205f-account-create-update-fk4rx"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.724260 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9h6k\" (UniqueName: \"kubernetes.io/projected/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-kube-api-access-w9h6k\") pod \"cinder-db-create-6x8x9\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.777412 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tpsj\" (UniqueName: \"kubernetes.io/projected/e9a360db-ef02-428f-85fc-2470c362c39e-kube-api-access-7tpsj\") pod \"barbican-7eb5-account-create-update-pdxbm\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.777500 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ad2bfb-9313-4afb-84aa-b42f108da314-operator-scripts\") pod \"cinder-205f-account-create-update-fk4rx\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.777548 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fca812df-30ed-47ad-9a3f-5fbb17d7032d-operator-scripts\") pod \"barbican-db-create-hq4mm\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.777602 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a360db-ef02-428f-85fc-2470c362c39e-operator-scripts\") pod \"barbican-7eb5-account-create-update-pdxbm\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.777678 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtg95\" (UniqueName: \"kubernetes.io/projected/94ad2bfb-9313-4afb-84aa-b42f108da314-kube-api-access-rtg95\") pod \"cinder-205f-account-create-update-fk4rx\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.777710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6kgz\" (UniqueName: \"kubernetes.io/projected/fca812df-30ed-47ad-9a3f-5fbb17d7032d-kube-api-access-g6kgz\") pod \"barbican-db-create-hq4mm\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.778516 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fca812df-30ed-47ad-9a3f-5fbb17d7032d-operator-scripts\") pod \"barbican-db-create-hq4mm\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.778653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a360db-ef02-428f-85fc-2470c362c39e-operator-scripts\") pod \"barbican-7eb5-account-create-update-pdxbm\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.783885 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6x8x9"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.807590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6kgz\" (UniqueName: \"kubernetes.io/projected/fca812df-30ed-47ad-9a3f-5fbb17d7032d-kube-api-access-g6kgz\") pod \"barbican-db-create-hq4mm\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.807588 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tpsj\" (UniqueName: \"kubernetes.io/projected/e9a360db-ef02-428f-85fc-2470c362c39e-kube-api-access-7tpsj\") pod \"barbican-7eb5-account-create-update-pdxbm\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.831203 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bll7z"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.833028 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.839192 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.839807 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.839867 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.839960 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5tv2h"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.849072 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bll7z"]
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.878918 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hq4mm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.880501 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ad2bfb-9313-4afb-84aa-b42f108da314-operator-scripts\") pod \"cinder-205f-account-create-update-fk4rx\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.880571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtg95\" (UniqueName: \"kubernetes.io/projected/94ad2bfb-9313-4afb-84aa-b42f108da314-kube-api-access-rtg95\") pod \"cinder-205f-account-create-update-fk4rx\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.881592 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ad2bfb-9313-4afb-84aa-b42f108da314-operator-scripts\") pod \"cinder-205f-account-create-update-fk4rx\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.902112 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtg95\" (UniqueName: \"kubernetes.io/projected/94ad2bfb-9313-4afb-84aa-b42f108da314-kube-api-access-rtg95\") pod \"cinder-205f-account-create-update-fk4rx\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.920126 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7eb5-account-create-update-pdxbm"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.982950 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g77w\" (UniqueName: \"kubernetes.io/projected/ab7a2652-1281-4158-8bac-c547abce2fed-kube-api-access-8g77w\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.983039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-combined-ca-bundle\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:13 crc kubenswrapper[4823]: I1206 06:45:13.983122 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-config-data\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.000726 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-205f-account-create-update-fk4rx"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.084613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g77w\" (UniqueName: \"kubernetes.io/projected/ab7a2652-1281-4158-8bac-c547abce2fed-kube-api-access-8g77w\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.084728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-combined-ca-bundle\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.084771 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-config-data\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.095381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-combined-ca-bundle\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.097612 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-config-data\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.111947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g77w\" (UniqueName: \"kubernetes.io/projected/ab7a2652-1281-4158-8bac-c547abce2fed-kube-api-access-8g77w\") pod \"keystone-db-sync-bll7z\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") " pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.146078 4823 generic.go:334] "Generic (PLEG): container finished" podID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerID="e8aead0ac1341b48036354257929d3e8549a7aec25d41df1a26718084e8b5420" exitCode=0
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.146192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" event={"ID":"9fc050cb-5e23-4a27-85f6-d95f40e2e237","Type":"ContainerDied","Data":"e8aead0ac1341b48036354257929d3e8549a7aec25d41df1a26718084e8b5420"}
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.146226 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" event={"ID":"9fc050cb-5e23-4a27-85f6-d95f40e2e237","Type":"ContainerStarted","Data":"cc43e1281be8b2e515ba7ca37fb7021afc51790d28a7b05a3beb446da78b36de"}
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.150092 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerStarted","Data":"7ea7b06b4a3ec3689dbd8ad582634df7765f9f13a2155ba8aa3a07eedf484685"}
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.243855 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6x8x9"]
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.386510 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.469479 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0"
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.521590 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hq4mm"]
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.601395 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7eb5-account-create-update-pdxbm"]
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.612442 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-205f-account-create-update-fk4rx"]
Dec 06 06:45:14 crc kubenswrapper[4823]: W1206 06:45:14.729762 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ad2bfb_9313_4afb_84aa_b42f108da314.slice/crio-74a5aecb89564e533ec636707db262a1311705d19b8f5f068393e4276ac9d182 WatchSource:0}: Error finding container 74a5aecb89564e533ec636707db262a1311705d19b8f5f068393e4276ac9d182: Status 404 returned error can't find the container with id 74a5aecb89564e533ec636707db262a1311705d19b8f5f068393e4276ac9d182
Dec 06 06:45:14 crc kubenswrapper[4823]: I1206 06:45:14.855757 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bll7z"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.197907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-205f-account-create-update-fk4rx" event={"ID":"94ad2bfb-9313-4afb-84aa-b42f108da314","Type":"ContainerStarted","Data":"74a5aecb89564e533ec636707db262a1311705d19b8f5f068393e4276ac9d182"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.207385 4823 generic.go:334] "Generic (PLEG): container finished" podID="222d27f2-d83e-4213-b3f4-83dd6c5d14e7" containerID="51c5325568e84084ee1ffbade411a2e00087f50b058a992d46b0f2b2ce3d45af" exitCode=0
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.207926 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6x8x9" event={"ID":"222d27f2-d83e-4213-b3f4-83dd6c5d14e7","Type":"ContainerDied","Data":"51c5325568e84084ee1ffbade411a2e00087f50b058a992d46b0f2b2ce3d45af"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.208002 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6x8x9" event={"ID":"222d27f2-d83e-4213-b3f4-83dd6c5d14e7","Type":"ContainerStarted","Data":"9caaa7e657192cfe3f227013b1ce6f759448e16f5317482db49a8319787d0521"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.218967 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7eb5-account-create-update-pdxbm" event={"ID":"e9a360db-ef02-428f-85fc-2470c362c39e","Type":"ContainerStarted","Data":"17d87c0438db93694a003226f0f578756e7c7b0a4f2d1eba2ac13287f3afdb21"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.221281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" event={"ID":"9fc050cb-5e23-4a27-85f6-d95f40e2e237","Type":"ContainerStarted","Data":"e672a3e6715195b8c61670f17bff31231abe9a20bda53737bee843abfca289ca"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.221885 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.235113 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hq4mm" event={"ID":"fca812df-30ed-47ad-9a3f-5fbb17d7032d","Type":"ContainerStarted","Data":"30524077567ae87bcaec47e7b4175d67199aca26c76248b69ef70604bb32a422"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.235160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hq4mm" event={"ID":"fca812df-30ed-47ad-9a3f-5fbb17d7032d","Type":"ContainerStarted","Data":"064307452da61cbb47b8df450960ef60583bd56b587570d26ad133c111bb5638"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.241349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bll7z" event={"ID":"ab7a2652-1281-4158-8bac-c547abce2fed","Type":"ContainerStarted","Data":"2a570e76de7ecef71851c071fb80becc2c1042bf31df89987a9dac86485c6c6b"}
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.252892 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7eb5-account-create-update-pdxbm" podStartSLOduration=2.2528737 podStartE2EDuration="2.2528737s" podCreationTimestamp="2025-12-06 06:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:15.241374547 +0000 UTC m=+1216.527126507" watchObservedRunningTime="2025-12-06 06:45:15.2528737 +0000 UTC m=+1216.538625660"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.283959 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podStartSLOduration=3.283934899 podStartE2EDuration="3.283934899s" podCreationTimestamp="2025-12-06 06:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:15.271756766 +0000 UTC m=+1216.557508726" watchObservedRunningTime="2025-12-06 06:45:15.283934899 +0000 UTC m=+1216.569686859"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.303107 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-hq4mm" podStartSLOduration=2.3030887829999998 podStartE2EDuration="2.303088783s" podCreationTimestamp="2025-12-06 06:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:15.291640912 +0000 UTC m=+1216.577392872" watchObservedRunningTime="2025-12-06 06:45:15.303088783 +0000 UTC m=+1216.588840743"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.443223 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-26bwc"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.444602 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.458566 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.458789 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-r6nnd"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.519764 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-26bwc"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.522578 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-combined-ca-bundle\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.522620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-config-data\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.522655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-db-sync-config-data\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.522732 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7fj\" (UniqueName: \"kubernetes.io/projected/aaeee530-df36-4fc7-96d5-b93755e8c4fe-kube-api-access-ld7fj\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.564069 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dchbm"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.568587 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.588393 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dchbm"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.621415 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-00ee-account-create-update-x65sw"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.623074 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-00ee-account-create-update-x65sw"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.625116 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzj2\" (UniqueName: \"kubernetes.io/projected/98060624-4d67-42df-ba2a-5f70c05200c1-kube-api-access-7gzj2\") pod \"glance-db-create-dchbm\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.625169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-combined-ca-bundle\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.625203 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-config-data\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.625260 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-db-sync-config-data\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.625371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld7fj\" (UniqueName: \"kubernetes.io/projected/aaeee530-df36-4fc7-96d5-b93755e8c4fe-kube-api-access-ld7fj\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.625397 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98060624-4d67-42df-ba2a-5f70c05200c1-operator-scripts\") pod \"glance-db-create-dchbm\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.626987 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.632851 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-db-sync-config-data\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.639795 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-config-data\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.648741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld7fj\" (UniqueName: \"kubernetes.io/projected/aaeee530-df36-4fc7-96d5-b93755e8c4fe-kube-api-access-ld7fj\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.662290 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-00ee-account-create-update-x65sw"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.667006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-combined-ca-bundle\") pod \"watcher-db-sync-26bwc\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") " pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.702649 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-clm65"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.704042 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-clm65"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.729971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzj2\" (UniqueName: \"kubernetes.io/projected/98060624-4d67-42df-ba2a-5f70c05200c1-kube-api-access-7gzj2\") pod \"glance-db-create-dchbm\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.730076 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2325c2-986c-4b21-b734-6e4a6b0c8199-operator-scripts\") pod \"glance-00ee-account-create-update-x65sw\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " pod="openstack/glance-00ee-account-create-update-x65sw"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.730163 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrp7l\" (UniqueName: \"kubernetes.io/projected/3f2325c2-986c-4b21-b734-6e4a6b0c8199-kube-api-access-zrp7l\") pod \"glance-00ee-account-create-update-x65sw\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " pod="openstack/glance-00ee-account-create-update-x65sw"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.730216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98060624-4d67-42df-ba2a-5f70c05200c1-operator-scripts\") pod \"glance-db-create-dchbm\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.731208 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98060624-4d67-42df-ba2a-5f70c05200c1-operator-scripts\") pod \"glance-db-create-dchbm\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.731905 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-clm65"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.757454 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzj2\" (UniqueName: \"kubernetes.io/projected/98060624-4d67-42df-ba2a-5f70c05200c1-kube-api-access-7gzj2\") pod \"glance-db-create-dchbm\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " pod="openstack/glance-db-create-dchbm"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.757560 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0edd-account-create-update-n4hqb"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.761530 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0edd-account-create-update-n4hqb"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.765978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.771884 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0edd-account-create-update-n4hqb"]
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.794702 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.832158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgqp\" (UniqueName: \"kubernetes.io/projected/75d32988-ae9c-4b85-ac33-e847bebb88c9-kube-api-access-4mgqp\") pod \"neutron-0edd-account-create-update-n4hqb\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " pod="openstack/neutron-0edd-account-create-update-n4hqb"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.832225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6566b\" (UniqueName: \"kubernetes.io/projected/06696730-e439-443b-a5f9-55ea9b90107f-kube-api-access-6566b\") pod \"neutron-db-create-clm65\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " pod="openstack/neutron-db-create-clm65"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.832281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2325c2-986c-4b21-b734-6e4a6b0c8199-operator-scripts\") pod \"glance-00ee-account-create-update-x65sw\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " pod="openstack/glance-00ee-account-create-update-x65sw"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.832360 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrp7l\" (UniqueName: \"kubernetes.io/projected/3f2325c2-986c-4b21-b734-6e4a6b0c8199-kube-api-access-zrp7l\") pod \"glance-00ee-account-create-update-x65sw\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " pod="openstack/glance-00ee-account-create-update-x65sw"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.832391 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d32988-ae9c-4b85-ac33-e847bebb88c9-operator-scripts\") pod \"neutron-0edd-account-create-update-n4hqb\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " pod="openstack/neutron-0edd-account-create-update-n4hqb"
Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.832428 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06696730-e439-443b-a5f9-55ea9b90107f-operator-scripts\") pod \"neutron-db-create-clm65\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " pod="openstack/neutron-db-create-clm65"
Dec 06 06:45:15
crc kubenswrapper[4823]: I1206 06:45:15.833268 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2325c2-986c-4b21-b734-6e4a6b0c8199-operator-scripts\") pod \"glance-00ee-account-create-update-x65sw\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " pod="openstack/glance-00ee-account-create-update-x65sw" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.909839 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dchbm" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.936785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6566b\" (UniqueName: \"kubernetes.io/projected/06696730-e439-443b-a5f9-55ea9b90107f-kube-api-access-6566b\") pod \"neutron-db-create-clm65\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " pod="openstack/neutron-db-create-clm65" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.936909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d32988-ae9c-4b85-ac33-e847bebb88c9-operator-scripts\") pod \"neutron-0edd-account-create-update-n4hqb\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.936945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06696730-e439-443b-a5f9-55ea9b90107f-operator-scripts\") pod \"neutron-db-create-clm65\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " pod="openstack/neutron-db-create-clm65" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.936994 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgqp\" (UniqueName: \"kubernetes.io/projected/75d32988-ae9c-4b85-ac33-e847bebb88c9-kube-api-access-4mgqp\") pod \"neutron-0edd-account-create-update-n4hqb\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.938204 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d32988-ae9c-4b85-ac33-e847bebb88c9-operator-scripts\") pod \"neutron-0edd-account-create-update-n4hqb\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:15 crc kubenswrapper[4823]: I1206 06:45:15.938438 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06696730-e439-443b-a5f9-55ea9b90107f-operator-scripts\") pod \"neutron-db-create-clm65\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " pod="openstack/neutron-db-create-clm65" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.021499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6566b\" (UniqueName: \"kubernetes.io/projected/06696730-e439-443b-a5f9-55ea9b90107f-kube-api-access-6566b\") pod \"neutron-db-create-clm65\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " pod="openstack/neutron-db-create-clm65" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.022994 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgqp\" (UniqueName: 
\"kubernetes.io/projected/75d32988-ae9c-4b85-ac33-e847bebb88c9-kube-api-access-4mgqp\") pod \"neutron-0edd-account-create-update-n4hqb\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.024720 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrp7l\" (UniqueName: \"kubernetes.io/projected/3f2325c2-986c-4b21-b734-6e4a6b0c8199-kube-api-access-zrp7l\") pod \"glance-00ee-account-create-update-x65sw\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " pod="openstack/glance-00ee-account-create-update-x65sw" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.189548 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-00ee-account-create-update-x65sw" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.223421 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-clm65" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.264618 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.288431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7eb5-account-create-update-pdxbm" event={"ID":"e9a360db-ef02-428f-85fc-2470c362c39e","Type":"ContainerStarted","Data":"3c93202077f81c3eeeed3933b988983828e57bf8646ec2075f872ae4a90dbe46"} Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.297504 4823 generic.go:334] "Generic (PLEG): container finished" podID="fca812df-30ed-47ad-9a3f-5fbb17d7032d" containerID="30524077567ae87bcaec47e7b4175d67199aca26c76248b69ef70604bb32a422" exitCode=0 Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.298140 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hq4mm" event={"ID":"fca812df-30ed-47ad-9a3f-5fbb17d7032d","Type":"ContainerDied","Data":"30524077567ae87bcaec47e7b4175d67199aca26c76248b69ef70604bb32a422"} Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.306475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-205f-account-create-update-fk4rx" event={"ID":"94ad2bfb-9313-4afb-84aa-b42f108da314","Type":"ContainerStarted","Data":"49c00b0e0b09e0c060278189c9ac44edfd229cb509c2153cf29b9a32e0fb40c0"} Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.365535 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-205f-account-create-update-fk4rx" podStartSLOduration=3.365514533 podStartE2EDuration="3.365514533s" podCreationTimestamp="2025-12-06 06:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:16.362134155 +0000 UTC m=+1217.647886135" watchObservedRunningTime="2025-12-06 06:45:16.365514533 +0000 UTC m=+1217.651266493" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.655926 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dchbm"] Dec 06 06:45:16 crc kubenswrapper[4823]: W1206 06:45:16.666932 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98060624_4d67_42df_ba2a_5f70c05200c1.slice/crio-093780bd5cbe2ef1a6a4e0459d4ec0b35b62ed64440424ee71c783ede33f22eb WatchSource:0}: Error finding container 
093780bd5cbe2ef1a6a4e0459d4ec0b35b62ed64440424ee71c783ede33f22eb: Status 404 returned error can't find the container with id 093780bd5cbe2ef1a6a4e0459d4ec0b35b62ed64440424ee71c783ede33f22eb Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.860272 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-26bwc"] Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.874933 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6x8x9" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.963100 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9h6k\" (UniqueName: \"kubernetes.io/projected/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-kube-api-access-w9h6k\") pod \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.963198 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-operator-scripts\") pod \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\" (UID: \"222d27f2-d83e-4213-b3f4-83dd6c5d14e7\") " Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.963955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "222d27f2-d83e-4213-b3f4-83dd6c5d14e7" (UID: "222d27f2-d83e-4213-b3f4-83dd6c5d14e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.965229 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:16 crc kubenswrapper[4823]: I1206 06:45:16.973283 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-kube-api-access-w9h6k" (OuterVolumeSpecName: "kube-api-access-w9h6k") pod "222d27f2-d83e-4213-b3f4-83dd6c5d14e7" (UID: "222d27f2-d83e-4213-b3f4-83dd6c5d14e7"). InnerVolumeSpecName "kube-api-access-w9h6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.066368 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9h6k\" (UniqueName: \"kubernetes.io/projected/222d27f2-d83e-4213-b3f4-83dd6c5d14e7-kube-api-access-w9h6k\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.104559 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0edd-account-create-update-n4hqb"] Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.203009 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-00ee-account-create-update-x65sw"] Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.203326 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-clm65"] Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.321328 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6x8x9" event={"ID":"222d27f2-d83e-4213-b3f4-83dd6c5d14e7","Type":"ContainerDied","Data":"9caaa7e657192cfe3f227013b1ce6f759448e16f5317482db49a8319787d0521"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.321370 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9caaa7e657192cfe3f227013b1ce6f759448e16f5317482db49a8319787d0521" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.321458 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6x8x9" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.329828 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-26bwc" event={"ID":"aaeee530-df36-4fc7-96d5-b93755e8c4fe","Type":"ContainerStarted","Data":"f6e7d040377344c80de637ddec936c026b2b2bc0500e0884b5ff776cbbb11864"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.338132 4823 generic.go:334] "Generic (PLEG): container finished" podID="e9a360db-ef02-428f-85fc-2470c362c39e" containerID="3c93202077f81c3eeeed3933b988983828e57bf8646ec2075f872ae4a90dbe46" exitCode=0 Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.338822 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7eb5-account-create-update-pdxbm" event={"ID":"e9a360db-ef02-428f-85fc-2470c362c39e","Type":"ContainerDied","Data":"3c93202077f81c3eeeed3933b988983828e57bf8646ec2075f872ae4a90dbe46"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.346841 4823 generic.go:334] "Generic (PLEG): container finished" podID="94ad2bfb-9313-4afb-84aa-b42f108da314" containerID="49c00b0e0b09e0c060278189c9ac44edfd229cb509c2153cf29b9a32e0fb40c0" exitCode=0 Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.346973 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-205f-account-create-update-fk4rx" event={"ID":"94ad2bfb-9313-4afb-84aa-b42f108da314","Type":"ContainerDied","Data":"49c00b0e0b09e0c060278189c9ac44edfd229cb509c2153cf29b9a32e0fb40c0"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.349561 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-00ee-account-create-update-x65sw" event={"ID":"3f2325c2-986c-4b21-b734-6e4a6b0c8199","Type":"ContainerStarted","Data":"e313774db51564db0bae1d50028abab843cce2c50e9fe194916d79ef3fc2390a"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.355952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-clm65" 
event={"ID":"06696730-e439-443b-a5f9-55ea9b90107f","Type":"ContainerStarted","Data":"4bf03346a9e0597368e59dae2cc5f05e1e2b7ab6931d8a10985f9e6882a409c4"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.360998 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0edd-account-create-update-n4hqb" event={"ID":"75d32988-ae9c-4b85-ac33-e847bebb88c9","Type":"ContainerStarted","Data":"b48ab484eaf3dc91e5defa434d329c16c3f3e3b28fd36c854b5c1cc5946ea40f"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.366042 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerStarted","Data":"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.378831 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dchbm" event={"ID":"98060624-4d67-42df-ba2a-5f70c05200c1","Type":"ContainerStarted","Data":"f97d97e5393744c22f11951867f76336de382347ec9e67ab7fff7f1a68190418"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.378899 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dchbm" event={"ID":"98060624-4d67-42df-ba2a-5f70c05200c1","Type":"ContainerStarted","Data":"093780bd5cbe2ef1a6a4e0459d4ec0b35b62ed64440424ee71c783ede33f22eb"} Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.436369 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dchbm" podStartSLOduration=2.436329975 podStartE2EDuration="2.436329975s" podCreationTimestamp="2025-12-06 06:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:17.429265861 +0000 UTC m=+1218.715017821" watchObservedRunningTime="2025-12-06 06:45:17.436329975 +0000 UTC m=+1218.722081935" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.734739 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hq4mm" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.784901 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6kgz\" (UniqueName: \"kubernetes.io/projected/fca812df-30ed-47ad-9a3f-5fbb17d7032d-kube-api-access-g6kgz\") pod \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.785014 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fca812df-30ed-47ad-9a3f-5fbb17d7032d-operator-scripts\") pod \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\" (UID: \"fca812df-30ed-47ad-9a3f-5fbb17d7032d\") " Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.785999 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca812df-30ed-47ad-9a3f-5fbb17d7032d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fca812df-30ed-47ad-9a3f-5fbb17d7032d" (UID: "fca812df-30ed-47ad-9a3f-5fbb17d7032d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.793079 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca812df-30ed-47ad-9a3f-5fbb17d7032d-kube-api-access-g6kgz" (OuterVolumeSpecName: "kube-api-access-g6kgz") pod "fca812df-30ed-47ad-9a3f-5fbb17d7032d" (UID: "fca812df-30ed-47ad-9a3f-5fbb17d7032d"). InnerVolumeSpecName "kube-api-access-g6kgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.886852 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6kgz\" (UniqueName: \"kubernetes.io/projected/fca812df-30ed-47ad-9a3f-5fbb17d7032d-kube-api-access-g6kgz\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:17 crc kubenswrapper[4823]: I1206 06:45:17.886903 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fca812df-30ed-47ad-9a3f-5fbb17d7032d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.390896 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0edd-account-create-update-n4hqb" event={"ID":"75d32988-ae9c-4b85-ac33-e847bebb88c9","Type":"ContainerDied","Data":"952e122e5acfa4a9c40450f07a21851624a9f337c5fadeee05cfbdd6e8b2679e"} Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.390723 4823 generic.go:334] "Generic (PLEG): container finished" podID="75d32988-ae9c-4b85-ac33-e847bebb88c9" containerID="952e122e5acfa4a9c40450f07a21851624a9f337c5fadeee05cfbdd6e8b2679e" exitCode=0 Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.396867 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hq4mm" Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.397461 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hq4mm" event={"ID":"fca812df-30ed-47ad-9a3f-5fbb17d7032d","Type":"ContainerDied","Data":"064307452da61cbb47b8df450960ef60583bd56b587570d26ad133c111bb5638"} Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.397514 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064307452da61cbb47b8df450960ef60583bd56b587570d26ad133c111bb5638" Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.400020 4823 generic.go:334] "Generic (PLEG): container finished" podID="3f2325c2-986c-4b21-b734-6e4a6b0c8199" containerID="dd032ea1b34c20c26087e9ca7492ca3cb722ba5c9b47af1dafdf20ece180f4f6" exitCode=0 Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.400097 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-00ee-account-create-update-x65sw" event={"ID":"3f2325c2-986c-4b21-b734-6e4a6b0c8199","Type":"ContainerDied","Data":"dd032ea1b34c20c26087e9ca7492ca3cb722ba5c9b47af1dafdf20ece180f4f6"} Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.404775 4823 generic.go:334] "Generic (PLEG): container finished" podID="98060624-4d67-42df-ba2a-5f70c05200c1" containerID="f97d97e5393744c22f11951867f76336de382347ec9e67ab7fff7f1a68190418" exitCode=0 Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.404876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dchbm" event={"ID":"98060624-4d67-42df-ba2a-5f70c05200c1","Type":"ContainerDied","Data":"f97d97e5393744c22f11951867f76336de382347ec9e67ab7fff7f1a68190418"} Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 
06:45:18.410785 4823 generic.go:334] "Generic (PLEG): container finished" podID="06696730-e439-443b-a5f9-55ea9b90107f" containerID="6307f4bcacfe8cdbf727c30f50d21ea17b715cafdf4ce2ceb913a6affe48dacd" exitCode=0 Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.410972 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-clm65" event={"ID":"06696730-e439-443b-a5f9-55ea9b90107f","Type":"ContainerDied","Data":"6307f4bcacfe8cdbf727c30f50d21ea17b715cafdf4ce2ceb913a6affe48dacd"} Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.895328 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-205f-account-create-update-fk4rx" Dec 06 06:45:18 crc kubenswrapper[4823]: I1206 06:45:18.899345 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7eb5-account-create-update-pdxbm" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.019644 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tpsj\" (UniqueName: \"kubernetes.io/projected/e9a360db-ef02-428f-85fc-2470c362c39e-kube-api-access-7tpsj\") pod \"e9a360db-ef02-428f-85fc-2470c362c39e\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.020040 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtg95\" (UniqueName: \"kubernetes.io/projected/94ad2bfb-9313-4afb-84aa-b42f108da314-kube-api-access-rtg95\") pod \"94ad2bfb-9313-4afb-84aa-b42f108da314\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.020090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a360db-ef02-428f-85fc-2470c362c39e-operator-scripts\") pod \"e9a360db-ef02-428f-85fc-2470c362c39e\" (UID: \"e9a360db-ef02-428f-85fc-2470c362c39e\") " Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.020282 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ad2bfb-9313-4afb-84aa-b42f108da314-operator-scripts\") pod \"94ad2bfb-9313-4afb-84aa-b42f108da314\" (UID: \"94ad2bfb-9313-4afb-84aa-b42f108da314\") " Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.020814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a360db-ef02-428f-85fc-2470c362c39e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9a360db-ef02-428f-85fc-2470c362c39e" (UID: "e9a360db-ef02-428f-85fc-2470c362c39e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.020852 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ad2bfb-9313-4afb-84aa-b42f108da314-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94ad2bfb-9313-4afb-84aa-b42f108da314" (UID: "94ad2bfb-9313-4afb-84aa-b42f108da314"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.021353 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a360db-ef02-428f-85fc-2470c362c39e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.021372 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ad2bfb-9313-4afb-84aa-b42f108da314-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.027012 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ad2bfb-9313-4afb-84aa-b42f108da314-kube-api-access-rtg95" (OuterVolumeSpecName: "kube-api-access-rtg95") pod "94ad2bfb-9313-4afb-84aa-b42f108da314" (UID: "94ad2bfb-9313-4afb-84aa-b42f108da314"). InnerVolumeSpecName "kube-api-access-rtg95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.027560 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a360db-ef02-428f-85fc-2470c362c39e-kube-api-access-7tpsj" (OuterVolumeSpecName: "kube-api-access-7tpsj") pod "e9a360db-ef02-428f-85fc-2470c362c39e" (UID: "e9a360db-ef02-428f-85fc-2470c362c39e"). InnerVolumeSpecName "kube-api-access-7tpsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.123076 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtg95\" (UniqueName: \"kubernetes.io/projected/94ad2bfb-9313-4afb-84aa-b42f108da314-kube-api-access-rtg95\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.123110 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tpsj\" (UniqueName: \"kubernetes.io/projected/e9a360db-ef02-428f-85fc-2470c362c39e-kube-api-access-7tpsj\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.437236 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7eb5-account-create-update-pdxbm" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.437245 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7eb5-account-create-update-pdxbm" event={"ID":"e9a360db-ef02-428f-85fc-2470c362c39e","Type":"ContainerDied","Data":"17d87c0438db93694a003226f0f578756e7c7b0a4f2d1eba2ac13287f3afdb21"} Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.437335 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d87c0438db93694a003226f0f578756e7c7b0a4f2d1eba2ac13287f3afdb21" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.441653 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-205f-account-create-update-fk4rx" event={"ID":"94ad2bfb-9313-4afb-84aa-b42f108da314","Type":"ContainerDied","Data":"74a5aecb89564e533ec636707db262a1311705d19b8f5f068393e4276ac9d182"} Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.441817 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a5aecb89564e533ec636707db262a1311705d19b8f5f068393e4276ac9d182" Dec 06 06:45:19 crc kubenswrapper[4823]: I1206 06:45:19.441962 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-205f-account-create-update-fk4rx" Dec 06 06:45:22 crc kubenswrapper[4823]: I1206 06:45:22.862977 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" Dec 06 06:45:22 crc kubenswrapper[4823]: I1206 06:45:22.926851 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b6cd6f45-wj5b8"] Dec 06 06:45:22 crc kubenswrapper[4823]: I1206 06:45:22.927120 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="dnsmasq-dns" containerID="cri-o://5a97807bff5e3069d6d2d75d09f91f2fbe7f8be3db4517bf690aba8a4f10974a" gracePeriod=10 Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.591811 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-00ee-account-create-update-x65sw" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.640615 4823 generic.go:334] "Generic (PLEG): container finished" podID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerID="5a97807bff5e3069d6d2d75d09f91f2fbe7f8be3db4517bf690aba8a4f10974a" exitCode=0 Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.640723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" event={"ID":"eabecdc3-1a42-4340-9bf7-6fccb70224b3","Type":"ContainerDied","Data":"5a97807bff5e3069d6d2d75d09f91f2fbe7f8be3db4517bf690aba8a4f10974a"} Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.650393 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.654351 4823 generic.go:334] "Generic (PLEG): container finished" podID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerID="d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237" exitCode=0 Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.654426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerDied","Data":"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237"} Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.663364 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-00ee-account-create-update-x65sw" event={"ID":"3f2325c2-986c-4b21-b734-6e4a6b0c8199","Type":"ContainerDied","Data":"e313774db51564db0bae1d50028abab843cce2c50e9fe194916d79ef3fc2390a"} Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.663418 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e313774db51564db0bae1d50028abab843cce2c50e9fe194916d79ef3fc2390a" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.663483 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-00ee-account-create-update-x65sw" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.690119 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0edd-account-create-update-n4hqb" event={"ID":"75d32988-ae9c-4b85-ac33-e847bebb88c9","Type":"ContainerDied","Data":"b48ab484eaf3dc91e5defa434d329c16c3f3e3b28fd36c854b5c1cc5946ea40f"} Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.690413 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48ab484eaf3dc91e5defa434d329c16c3f3e3b28fd36c854b5c1cc5946ea40f" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.690541 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0edd-account-create-update-n4hqb" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.751681 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrp7l\" (UniqueName: \"kubernetes.io/projected/3f2325c2-986c-4b21-b734-6e4a6b0c8199-kube-api-access-zrp7l\") pod \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.752042 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2325c2-986c-4b21-b734-6e4a6b0c8199-operator-scripts\") pod \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\" (UID: \"3f2325c2-986c-4b21-b734-6e4a6b0c8199\") " Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.752535 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f2325c2-986c-4b21-b734-6e4a6b0c8199-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f2325c2-986c-4b21-b734-6e4a6b0c8199" (UID: "3f2325c2-986c-4b21-b734-6e4a6b0c8199"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.759682 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2325c2-986c-4b21-b734-6e4a6b0c8199-kube-api-access-zrp7l" (OuterVolumeSpecName: "kube-api-access-zrp7l") pod "3f2325c2-986c-4b21-b734-6e4a6b0c8199" (UID: "3f2325c2-986c-4b21-b734-6e4a6b0c8199"). InnerVolumeSpecName "kube-api-access-zrp7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.853677 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d32988-ae9c-4b85-ac33-e847bebb88c9-operator-scripts\") pod \"75d32988-ae9c-4b85-ac33-e847bebb88c9\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.853839 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mgqp\" (UniqueName: \"kubernetes.io/projected/75d32988-ae9c-4b85-ac33-e847bebb88c9-kube-api-access-4mgqp\") pod \"75d32988-ae9c-4b85-ac33-e847bebb88c9\" (UID: \"75d32988-ae9c-4b85-ac33-e847bebb88c9\") " Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.854221 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d32988-ae9c-4b85-ac33-e847bebb88c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75d32988-ae9c-4b85-ac33-e847bebb88c9" (UID: "75d32988-ae9c-4b85-ac33-e847bebb88c9"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.854786 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d32988-ae9c-4b85-ac33-e847bebb88c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.854812 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrp7l\" (UniqueName: \"kubernetes.io/projected/3f2325c2-986c-4b21-b734-6e4a6b0c8199-kube-api-access-zrp7l\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.854829 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2325c2-986c-4b21-b734-6e4a6b0c8199-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.858933 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d32988-ae9c-4b85-ac33-e847bebb88c9-kube-api-access-4mgqp" (OuterVolumeSpecName: "kube-api-access-4mgqp") pod "75d32988-ae9c-4b85-ac33-e847bebb88c9" (UID: "75d32988-ae9c-4b85-ac33-e847bebb88c9"). InnerVolumeSpecName "kube-api-access-4mgqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:24 crc kubenswrapper[4823]: I1206 06:45:24.956146 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mgqp\" (UniqueName: \"kubernetes.io/projected/75d32988-ae9c-4b85-ac33-e847bebb88c9-kube-api-access-4mgqp\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.738819 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-clm65" event={"ID":"06696730-e439-443b-a5f9-55ea9b90107f","Type":"ContainerDied","Data":"4bf03346a9e0597368e59dae2cc5f05e1e2b7ab6931d8a10985f9e6882a409c4"} Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.739488 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bf03346a9e0597368e59dae2cc5f05e1e2b7ab6931d8a10985f9e6882a409c4" Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.813862 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-clm65" Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.849916 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.926061 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06696730-e439-443b-a5f9-55ea9b90107f-operator-scripts\") pod \"06696730-e439-443b-a5f9-55ea9b90107f\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.926141 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6566b\" (UniqueName: \"kubernetes.io/projected/06696730-e439-443b-a5f9-55ea9b90107f-kube-api-access-6566b\") pod \"06696730-e439-443b-a5f9-55ea9b90107f\" (UID: \"06696730-e439-443b-a5f9-55ea9b90107f\") " Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.926726 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06696730-e439-443b-a5f9-55ea9b90107f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06696730-e439-443b-a5f9-55ea9b90107f" (UID: "06696730-e439-443b-a5f9-55ea9b90107f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:29 crc kubenswrapper[4823]: I1206 06:45:29.932550 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06696730-e439-443b-a5f9-55ea9b90107f-kube-api-access-6566b" (OuterVolumeSpecName: "kube-api-access-6566b") pod "06696730-e439-443b-a5f9-55ea9b90107f" (UID: "06696730-e439-443b-a5f9-55ea9b90107f"). InnerVolumeSpecName "kube-api-access-6566b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.028297 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06696730-e439-443b-a5f9-55ea9b90107f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.028352 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6566b\" (UniqueName: \"kubernetes.io/projected/06696730-e439-443b-a5f9-55ea9b90107f-kube-api-access-6566b\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: E1206 06:45:30.445289 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Dec 06 06:45:30 crc kubenswrapper[4823]: E1206 06:45:30.445628 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Dec 06 06:45:30 crc kubenswrapper[4823]: E1206 06:45:30.445845 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.174:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ld7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-26bwc_openstack(aaeee530-df36-4fc7-96d5-b93755e8c4fe): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:45:30 crc kubenswrapper[4823]: E1206 06:45:30.447255 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-26bwc" podUID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.555405 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dchbm" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.580377 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.639301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzj2\" (UniqueName: \"kubernetes.io/projected/98060624-4d67-42df-ba2a-5f70c05200c1-kube-api-access-7gzj2\") pod \"98060624-4d67-42df-ba2a-5f70c05200c1\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.639367 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98060624-4d67-42df-ba2a-5f70c05200c1-operator-scripts\") pod \"98060624-4d67-42df-ba2a-5f70c05200c1\" (UID: \"98060624-4d67-42df-ba2a-5f70c05200c1\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.640467 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98060624-4d67-42df-ba2a-5f70c05200c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98060624-4d67-42df-ba2a-5f70c05200c1" (UID: "98060624-4d67-42df-ba2a-5f70c05200c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.646331 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98060624-4d67-42df-ba2a-5f70c05200c1-kube-api-access-7gzj2" (OuterVolumeSpecName: "kube-api-access-7gzj2") pod "98060624-4d67-42df-ba2a-5f70c05200c1" (UID: "98060624-4d67-42df-ba2a-5f70c05200c1"). InnerVolumeSpecName "kube-api-access-7gzj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.741741 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-sb\") pod \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.742228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-nb\") pod \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.742331 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwcsm\" (UniqueName: \"kubernetes.io/projected/eabecdc3-1a42-4340-9bf7-6fccb70224b3-kube-api-access-pwcsm\") pod \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.742360 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-config\") pod \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.742470 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-dns-svc\") pod \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\" (UID: \"eabecdc3-1a42-4340-9bf7-6fccb70224b3\") " Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.742970 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzj2\" (UniqueName: \"kubernetes.io/projected/98060624-4d67-42df-ba2a-5f70c05200c1-kube-api-access-7gzj2\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.742990 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98060624-4d67-42df-ba2a-5f70c05200c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.757373 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabecdc3-1a42-4340-9bf7-6fccb70224b3-kube-api-access-pwcsm" (OuterVolumeSpecName: "kube-api-access-pwcsm") pod "eabecdc3-1a42-4340-9bf7-6fccb70224b3" (UID: "eabecdc3-1a42-4340-9bf7-6fccb70224b3"). InnerVolumeSpecName "kube-api-access-pwcsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.818007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dchbm" event={"ID":"98060624-4d67-42df-ba2a-5f70c05200c1","Type":"ContainerDied","Data":"093780bd5cbe2ef1a6a4e0459d4ec0b35b62ed64440424ee71c783ede33f22eb"} Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.818058 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="093780bd5cbe2ef1a6a4e0459d4ec0b35b62ed64440424ee71c783ede33f22eb" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.818132 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-dchbm" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.820511 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" event={"ID":"eabecdc3-1a42-4340-9bf7-6fccb70224b3","Type":"ContainerDied","Data":"542e734bf9035713231aa65d9dc1e598ad6acb2db32b6ea8a56d2b6b65976cd4"} Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.820594 4823 scope.go:117] "RemoveContainer" containerID="5a97807bff5e3069d6d2d75d09f91f2fbe7f8be3db4517bf690aba8a4f10974a" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.820876 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.823429 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerStarted","Data":"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b"} Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.842941 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-clm65" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.843832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bll7z" event={"ID":"ab7a2652-1281-4158-8bac-c547abce2fed","Type":"ContainerStarted","Data":"1a7a6ef9c112f5f304cca722950a035aa057a7d9023c2567facbd46c1f07ec76"} Dec 06 06:45:30 crc kubenswrapper[4823]: E1206 06:45:30.845584 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-26bwc" podUID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.857100 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwcsm\" (UniqueName: \"kubernetes.io/projected/eabecdc3-1a42-4340-9bf7-6fccb70224b3-kube-api-access-pwcsm\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.869284 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eabecdc3-1a42-4340-9bf7-6fccb70224b3" (UID: "eabecdc3-1a42-4340-9bf7-6fccb70224b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.873801 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eabecdc3-1a42-4340-9bf7-6fccb70224b3" (UID: "eabecdc3-1a42-4340-9bf7-6fccb70224b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.882275 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-config" (OuterVolumeSpecName: "config") pod "eabecdc3-1a42-4340-9bf7-6fccb70224b3" (UID: "eabecdc3-1a42-4340-9bf7-6fccb70224b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.883911 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bll7z" podStartSLOduration=2.302001212 podStartE2EDuration="17.883848548s" podCreationTimestamp="2025-12-06 06:45:13 +0000 UTC" firstStartedPulling="2025-12-06 06:45:14.866981301 +0000 UTC m=+1216.152733261" lastFinishedPulling="2025-12-06 06:45:30.448828637 +0000 UTC m=+1231.734580597" observedRunningTime="2025-12-06 06:45:30.881823909 +0000 UTC m=+1232.167575869" watchObservedRunningTime="2025-12-06 06:45:30.883848548 +0000 UTC m=+1232.169600508" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.890168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eabecdc3-1a42-4340-9bf7-6fccb70224b3" (UID: "eabecdc3-1a42-4340-9bf7-6fccb70224b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.958629 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.958759 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.958774 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.958785 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabecdc3-1a42-4340-9bf7-6fccb70224b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:30 crc kubenswrapper[4823]: I1206 06:45:30.978343 4823 scope.go:117] "RemoveContainer" containerID="a8419210f1976208c4cdb29e6cc83e1addcb0067fa10753e6272c971775a3be0" Dec 06 06:45:31 crc kubenswrapper[4823]: I1206 06:45:31.161807 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b6cd6f45-wj5b8"] Dec 06 06:45:31 crc kubenswrapper[4823]: I1206 06:45:31.168216 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68b6cd6f45-wj5b8"] Dec 06 06:45:33 crc kubenswrapper[4823]: I1206 06:45:33.150520 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" path="/var/lib/kubelet/pods/eabecdc3-1a42-4340-9bf7-6fccb70224b3/volumes" Dec 06 06:45:33 crc kubenswrapper[4823]: I1206 06:45:33.876957 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerStarted","Data":"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2"} Dec 06 06:45:34 crc kubenswrapper[4823]: I1206 06:45:34.850382 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68b6cd6f45-wj5b8" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Dec 06 
Dec 06 06:45:34 crc kubenswrapper[4823]: I1206 06:45:34.888720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerStarted","Data":"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c"}
Dec 06 06:45:34 crc kubenswrapper[4823]: I1206 06:45:34.921448 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.921425108 podStartE2EDuration="22.921425108s" podCreationTimestamp="2025-12-06 06:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:34.913778227 +0000 UTC m=+1236.199530197" watchObservedRunningTime="2025-12-06 06:45:34.921425108 +0000 UTC m=+1236.207177068"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.747973 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9fd7r"]
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748489 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ad2bfb-9313-4afb-84aa-b42f108da314" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748518 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ad2bfb-9313-4afb-84aa-b42f108da314" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748532 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98060624-4d67-42df-ba2a-5f70c05200c1" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748541 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="98060624-4d67-42df-ba2a-5f70c05200c1" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748549 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2325c2-986c-4b21-b734-6e4a6b0c8199" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748556 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2325c2-986c-4b21-b734-6e4a6b0c8199" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748586 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="dnsmasq-dns"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748594 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="dnsmasq-dns"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748608 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a360db-ef02-428f-85fc-2470c362c39e" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748616 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a360db-ef02-428f-85fc-2470c362c39e" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748631 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222d27f2-d83e-4213-b3f4-83dd6c5d14e7" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748638 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="222d27f2-d83e-4213-b3f4-83dd6c5d14e7" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748651 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca812df-30ed-47ad-9a3f-5fbb17d7032d" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748679 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca812df-30ed-47ad-9a3f-5fbb17d7032d" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748691 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06696730-e439-443b-a5f9-55ea9b90107f" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748699 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="06696730-e439-443b-a5f9-55ea9b90107f" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748708 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="init"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748715 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="init"
Dec 06 06:45:35 crc kubenswrapper[4823]: E1206 06:45:35.748729 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d32988-ae9c-4b85-ac33-e847bebb88c9" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748737 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d32988-ae9c-4b85-ac33-e847bebb88c9" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748928 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ad2bfb-9313-4afb-84aa-b42f108da314" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748942 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="98060624-4d67-42df-ba2a-5f70c05200c1" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748961 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="eabecdc3-1a42-4340-9bf7-6fccb70224b3" containerName="dnsmasq-dns"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748973 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a360db-ef02-428f-85fc-2470c362c39e" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748982 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d32988-ae9c-4b85-ac33-e847bebb88c9" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.748992 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="222d27f2-d83e-4213-b3f4-83dd6c5d14e7" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.749006 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2325c2-986c-4b21-b734-6e4a6b0c8199" containerName="mariadb-account-create-update"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.749020 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="06696730-e439-443b-a5f9-55ea9b90107f" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.749042 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca812df-30ed-47ad-9a3f-5fbb17d7032d" containerName="mariadb-database-create"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.750035 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.753914 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.756615 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rdcrh"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.760273 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9fd7r"]
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.847392 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qtl\" (UniqueName: \"kubernetes.io/projected/f5301842-d5df-4df6-8699-56f86789df64-kube-api-access-b6qtl\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.847458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-combined-ca-bundle\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.847591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-db-sync-config-data\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.847646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-config-data\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.901185 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2652-1281-4158-8bac-c547abce2fed" containerID="1a7a6ef9c112f5f304cca722950a035aa057a7d9023c2567facbd46c1f07ec76" exitCode=0
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.901239 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bll7z" event={"ID":"ab7a2652-1281-4158-8bac-c547abce2fed","Type":"ContainerDied","Data":"1a7a6ef9c112f5f304cca722950a035aa057a7d9023c2567facbd46c1f07ec76"}
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.949822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-db-sync-config-data\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.949924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-config-data\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
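
The paired cpu_manager/state_mem and memory_manager lines above are routine admission-time housekeeping: when a new pod (here glance-db-sync-9fd7r) is admitted, the resource managers drop per-container assignments recorded for pods the kubelet no longer tracks (the mariadb-* jobs and the old dnsmasq pod seen terminating earlier). The cpu_manager logs each removal at error level even though nothing is wrong. A rough Go sketch of that pattern, with illustrative types only (not kubelet source):

package main

import "fmt"

type state struct {
	// podUID -> containerName -> assigned CPUs (placeholder representation)
	assignments map[string]map[string][]int
}

// removeStale drops every assignment whose pod is no longer active,
// mirroring the "RemoveStaleState: removing container" lines above.
func (s *state) removeStale(active map[string]bool) {
	for podUID, containers := range s.assignments {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(s.assignments, podUID)
	}
}

func main() {
	s := &state{assignments: map[string]map[string][]int{
		"98060624-4d67-42df-ba2a-5f70c05200c1": {"mariadb-database-create": {0, 1}},
	}}
	s.removeStale(map[string]bool{}) // no active pod references the old UID
}
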
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.950212 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qtl\" (UniqueName: \"kubernetes.io/projected/f5301842-d5df-4df6-8699-56f86789df64-kube-api-access-b6qtl\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.950268 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-combined-ca-bundle\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.958346 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-config-data\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.958530 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-combined-ca-bundle\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.969222 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-db-sync-config-data\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:35 crc kubenswrapper[4823]: I1206 06:45:35.978413 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qtl\" (UniqueName: \"kubernetes.io/projected/f5301842-d5df-4df6-8699-56f86789df64-kube-api-access-b6qtl\") pod \"glance-db-sync-9fd7r\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.052258 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.052322 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.052368 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2"
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.053299 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a9115986422c421655f98d90d9af3c203435cfaca9c79b7b491e0d1286a3843"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.053365 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://3a9115986422c421655f98d90d9af3c203435cfaca9c79b7b491e0d1286a3843" gracePeriod=600
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.072585 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9fd7r"
Dec 06 06:45:36 crc kubenswrapper[4823]: I1206 06:45:36.693987 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9fd7r"]
Dec 06 06:45:36 crc kubenswrapper[4823]: W1206 06:45:36.699221 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5301842_d5df_4df6_8699_56f86789df64.slice/crio-9055b8e4bc5d1b5fa577d5d423c1c5a25638f38458794eefb8ae7e987f36388e WatchSource:0}: Error finding container 9055b8e4bc5d1b5fa577d5d423c1c5a25638f38458794eefb8ae7e987f36388e: Status 404 returned error can't find the container with id 9055b8e4bc5d1b5fa577d5d423c1c5a25638f38458794eefb8ae7e987f36388e
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.047544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9fd7r" event={"ID":"f5301842-d5df-4df6-8699-56f86789df64","Type":"ContainerStarted","Data":"9055b8e4bc5d1b5fa577d5d423c1c5a25638f38458794eefb8ae7e987f36388e"}
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.057328 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="3a9115986422c421655f98d90d9af3c203435cfaca9c79b7b491e0d1286a3843" exitCode=0
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.057411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"3a9115986422c421655f98d90d9af3c203435cfaca9c79b7b491e0d1286a3843"}
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.057468 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"cf0da5e873b0675ce3affbf1aff07940b681c1bb20491ade8083d807561c411f"}
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.057489 4823 scope.go:117] "RemoveContainer" containerID="2a1cf76af8a6f384ac47680b767c5129bfc1481da61050b03811147d1a619220"
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.543238 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bll7z"
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.580482 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.751882 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-config-data\") pod \"ab7a2652-1281-4158-8bac-c547abce2fed\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") "
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.752039 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g77w\" (UniqueName: \"kubernetes.io/projected/ab7a2652-1281-4158-8bac-c547abce2fed-kube-api-access-8g77w\") pod \"ab7a2652-1281-4158-8bac-c547abce2fed\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") "
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.752084 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-combined-ca-bundle\") pod \"ab7a2652-1281-4158-8bac-c547abce2fed\" (UID: \"ab7a2652-1281-4158-8bac-c547abce2fed\") "
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.779551 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7a2652-1281-4158-8bac-c547abce2fed-kube-api-access-8g77w" (OuterVolumeSpecName: "kube-api-access-8g77w") pod "ab7a2652-1281-4158-8bac-c547abce2fed" (UID: "ab7a2652-1281-4158-8bac-c547abce2fed"). InnerVolumeSpecName "kube-api-access-8g77w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.793722 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab7a2652-1281-4158-8bac-c547abce2fed" (UID: "ab7a2652-1281-4158-8bac-c547abce2fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.853832 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.853864 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a2652-1281-4158-8bac-c547abce2fed-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:37 crc kubenswrapper[4823]: I1206 06:45:37.853880 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g77w\" (UniqueName: \"kubernetes.io/projected/ab7a2652-1281-4158-8bac-c547abce2fed-kube-api-access-8g77w\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.073629 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bll7z" event={"ID":"ab7a2652-1281-4158-8bac-c547abce2fed","Type":"ContainerDied","Data":"2a570e76de7ecef71851c071fb80becc2c1042bf31df89987a9dac86485c6c6b"} Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.073703 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a570e76de7ecef71851c071fb80becc2c1042bf31df89987a9dac86485c6c6b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.073777 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bll7z" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.468156 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bd4959b7-8gj7b"] Dec 06 06:45:38 crc kubenswrapper[4823]: E1206 06:45:38.469226 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2652-1281-4158-8bac-c547abce2fed" containerName="keystone-db-sync" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.469340 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2652-1281-4158-8bac-c547abce2fed" containerName="keystone-db-sync" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.469634 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7a2652-1281-4158-8bac-c547abce2fed" containerName="keystone-db-sync" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.471000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.527828 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bd4959b7-8gj7b"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.550005 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ktsq4"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.551457 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.558150 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.558251 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.569649 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.570085 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.570243 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5tv2h" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.589997 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ktsq4"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.674788 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-scripts\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.674886 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-sb\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675242 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-config\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675283 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnhl\" (UniqueName: \"kubernetes.io/projected/03a2a291-9168-4869-9c60-f7c281733b5b-kube-api-access-mqnhl\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675326 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-swift-storage-0\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675383 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-fernet-keys\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675408 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-credential-keys\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675435 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klxp\" (UniqueName: \"kubernetes.io/projected/0bdf563e-881e-4e35-a95e-8ac49f9c498e-kube-api-access-2klxp\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675460 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-svc\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675490 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-nb\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-config-data\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.675556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-combined-ca-bundle\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778116 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-fernet-keys\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-credential-keys\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778469 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2klxp\" (UniqueName: \"kubernetes.io/projected/0bdf563e-881e-4e35-a95e-8ac49f9c498e-kube-api-access-2klxp\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778493 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-svc\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778512 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-nb\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-config-data\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-combined-ca-bundle\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778626 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-scripts\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778712 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-sb\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778779 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-config\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778817 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqnhl\" (UniqueName: \"kubernetes.io/projected/03a2a291-9168-4869-9c60-f7c281733b5b-kube-api-access-mqnhl\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.778852 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-swift-storage-0\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.780023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-swift-storage-0\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.783816 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-svc\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.784605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-sb\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.785614 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-nb\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.786158 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-config\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.790938 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-fernet-keys\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.792350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-config-data\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.802927 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-scripts\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.803079 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-credential-keys\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.827699 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2klxp\" (UniqueName: \"kubernetes.io/projected/0bdf563e-881e-4e35-a95e-8ac49f9c498e-kube-api-access-2klxp\") pod \"dnsmasq-dns-86bd4959b7-8gj7b\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " 
pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.830835 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-combined-ca-bundle\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.836790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqnhl\" (UniqueName: \"kubernetes.io/projected/03a2a291-9168-4869-9c60-f7c281733b5b-kube-api-access-mqnhl\") pod \"keystone-bootstrap-ktsq4\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.842179 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kls2x"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.846192 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.856530 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.856820 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2cwbd" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.859582 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.886788 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f78c4d669-7mcnd"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.889860 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.900221 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.900704 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.902709 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-v96f4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.903434 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.910746 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.913977 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f78c4d669-7mcnd"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.953747 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kls2x"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.982793 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x7fxv"] Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.984393 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.985286 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-scripts\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.985342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-config-data\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.985422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hxh\" (UniqueName: \"kubernetes.io/projected/157d2d95-42a3-4f80-8c1d-b8c27bee49be-kube-api-access-84hxh\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.985447 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/157d2d95-42a3-4f80-8c1d-b8c27bee49be-etc-machine-id\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.985497 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-combined-ca-bundle\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.985606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-db-sync-config-data\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:38 crc kubenswrapper[4823]: I1206 06:45:38.989953 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.006288 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dsznk" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.184432 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.185846 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hxh\" (UniqueName: \"kubernetes.io/projected/157d2d95-42a3-4f80-8c1d-b8c27bee49be-kube-api-access-84hxh\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.185904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/157d2d95-42a3-4f80-8c1d-b8c27bee49be-etc-machine-id\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.185974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-combined-ca-bundle\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186004 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-combined-ca-bundle\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqv9\" (UniqueName: \"kubernetes.io/projected/3d04e917-34c8-4df1-bc89-69ca7b7753ac-kube-api-access-plqv9\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186078 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-db-sync-config-data\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186114 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkqzk\" (UniqueName: \"kubernetes.io/projected/786cb861-417a-49bc-a619-afb242b5d8c2-kube-api-access-rkqzk\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-scripts\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186177 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-config-data\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc 
kubenswrapper[4823]: I1206 06:45:39.186212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-scripts\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186238 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/786cb861-417a-49bc-a619-afb242b5d8c2-logs\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186264 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-config-data\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/786cb861-417a-49bc-a619-afb242b5d8c2-horizon-secret-key\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186339 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-db-sync-config-data\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.186786 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/157d2d95-42a3-4f80-8c1d-b8c27bee49be-etc-machine-id\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.286690 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-db-sync-config-data\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.288081 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-config-data\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.288622 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-scripts\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.289255 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
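
For anyone post-processing this journal: each record is a syslog-style prefix ("Dec 06 06:45:39 crc kubenswrapper[4823]:") followed by a klog header (level letter, MMDD timestamp, PID, file:line) and a structured message. A small Go example of splitting a record into those parts; the regex is ours, written only for the shape of this excerpt:

package main

import (
	"fmt"
	"regexp"
)

// recordRE captures: journal time, host, PID, klog level (I/W/E),
// klog timestamp, source location, and the remaining message.
var recordRE = regexp.MustCompile(
	`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) kubenswrapper\[(\d+)\]: ([IWE])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+ (\S+)\] (.*)$`)

func main() {
	line := `Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.543133 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"`
	m := recordRE.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("time=%s host=%s level=%s site=%s msg=%s\n", m[1], m[2], m[4], m[6], m[7])
}

This only applies once records are one per line, as reflowed here; the message part still contains klog's escaped quotes (\"), which a stricter parser would unescape before interpreting key="value" pairs.
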
(UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-combined-ca-bundle\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416357 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkqzk\" (UniqueName: \"kubernetes.io/projected/786cb861-417a-49bc-a619-afb242b5d8c2-kube-api-access-rkqzk\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416495 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-scripts\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/786cb861-417a-49bc-a619-afb242b5d8c2-logs\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416562 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-config-data\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/786cb861-417a-49bc-a619-afb242b5d8c2-horizon-secret-key\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416722 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-db-sync-config-data\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-combined-ca-bundle\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.416978 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqv9\" (UniqueName: \"kubernetes.io/projected/3d04e917-34c8-4df1-bc89-69ca7b7753ac-kube-api-access-plqv9\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.418292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-scripts\") pod \"horizon-6f78c4d669-7mcnd\" (UID: 
\"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.421278 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/786cb861-417a-49bc-a619-afb242b5d8c2-logs\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.427911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/786cb861-417a-49bc-a619-afb242b5d8c2-horizon-secret-key\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.442423 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-db-sync-config-data\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.445071 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hxh\" (UniqueName: \"kubernetes.io/projected/157d2d95-42a3-4f80-8c1d-b8c27bee49be-kube-api-access-84hxh\") pod \"cinder-db-sync-kls2x\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") " pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.445651 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-config-data\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.454528 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-combined-ca-bundle\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.467557 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dwpkn"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.474089 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x7fxv"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.472682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkqzk\" (UniqueName: \"kubernetes.io/projected/786cb861-417a-49bc-a619-afb242b5d8c2-kube-api-access-rkqzk\") pod \"horizon-6f78c4d669-7mcnd\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.474256 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.476274 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqv9\" (UniqueName: \"kubernetes.io/projected/3d04e917-34c8-4df1-bc89-69ca7b7753ac-kube-api-access-plqv9\") pod \"barbican-db-sync-x7fxv\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.481779 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.482124 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.484973 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jp6tc" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.496783 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dwpkn"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.535265 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.538747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.543133 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.550477 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.550961 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.579251 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.619952 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.635015 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kls2x" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.636434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9sg\" (UniqueName: \"kubernetes.io/projected/b984559e-efdf-4d21-917f-420506f550da-kube-api-access-td9sg\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.636527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-config\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.636683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-combined-ca-bundle\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.647213 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bd4959b7-8gj7b"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.659931 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f69b87f6c-9xlvs"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.662191 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.669897 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c669f5d67-lzz9h"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.671806 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.677983 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c669f5d67-lzz9h"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.693537 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-h7n5l"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.695382 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.699100 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gg7q4" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.699561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.699891 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.705940 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f69b87f6c-9xlvs"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.713001 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-h7n5l"] Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.739884 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-run-httpd\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.739939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-log-httpd\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.739982 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9sg\" (UniqueName: \"kubernetes.io/projected/b984559e-efdf-4d21-917f-420506f550da-kube-api-access-td9sg\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740007 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740029 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5pp\" (UniqueName: \"kubernetes.io/projected/2955103b-2cae-4fe0-8ffe-bbca608cad77-kube-api-access-8j5pp\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-config\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-combined-ca-bundle\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " 
pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740134 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740160 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-scripts\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740187 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2955103b-2cae-4fe0-8ffe-bbca608cad77-logs\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-combined-ca-bundle\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740223 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-config-data\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740241 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpw9s\" (UniqueName: \"kubernetes.io/projected/f7be6d4d-b41b-462c-ac84-16b84a45b63c-kube-api-access-gpw9s\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-scripts\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.740298 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-config-data\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.751872 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-config\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.759411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-td9sg\" (UniqueName: \"kubernetes.io/projected/b984559e-efdf-4d21-917f-420506f550da-kube-api-access-td9sg\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.763386 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-combined-ca-bundle\") pod \"neutron-db-sync-dwpkn\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.843718 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-run-httpd\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844090 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca59bae-8d1e-48ac-9fde-00b4482fd916-logs\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-log-httpd\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844249 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844288 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j5pp\" (UniqueName: \"kubernetes.io/projected/2955103b-2cae-4fe0-8ffe-bbca608cad77-kube-api-access-8j5pp\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844391 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-config\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844446 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-config-data\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844493 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-combined-ca-bundle\") pod \"placement-db-sync-h7n5l\" (UID: 
\"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-scripts\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844562 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-swift-storage-0\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844612 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-nb\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844730 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-scripts\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844761 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-svc\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844815 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2955103b-2cae-4fe0-8ffe-bbca608cad77-logs\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.844974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-config-data\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpw9s\" (UniqueName: \"kubernetes.io/projected/f7be6d4d-b41b-462c-ac84-16b84a45b63c-kube-api-access-gpw9s\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845044 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ca59bae-8d1e-48ac-9fde-00b4482fd916-horizon-secret-key\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-sb\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-scripts\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845193 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-config-data\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845246 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7jm\" (UniqueName: \"kubernetes.io/projected/f3581c1d-97bc-41ba-80d4-89c0362131f7-kube-api-access-9c7jm\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.845281 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5f6l\" (UniqueName: \"kubernetes.io/projected/7ca59bae-8d1e-48ac-9fde-00b4482fd916-kube-api-access-s5f6l\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.847986 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-log-httpd\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.850800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2955103b-2cae-4fe0-8ffe-bbca608cad77-logs\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.850837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-run-httpd\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.852574 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-scripts\") pod 
\"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.856859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-config-data\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.864001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.866766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-config-data\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.871495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-combined-ca-bundle\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.871789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpw9s\" (UniqueName: \"kubernetes.io/projected/f7be6d4d-b41b-462c-ac84-16b84a45b63c-kube-api-access-gpw9s\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.872801 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") " pod="openstack/ceilometer-0" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.873106 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-scripts\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.891370 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j5pp\" (UniqueName: \"kubernetes.io/projected/2955103b-2cae-4fe0-8ffe-bbca608cad77-kube-api-access-8j5pp\") pod \"placement-db-sync-h7n5l\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") " pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.896065 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952278 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-config\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-config-data\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952352 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-scripts\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952372 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-swift-storage-0\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952399 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-nb\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-svc\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ca59bae-8d1e-48ac-9fde-00b4482fd916-horizon-secret-key\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952478 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-sb\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952524 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c7jm\" (UniqueName: \"kubernetes.io/projected/f3581c1d-97bc-41ba-80d4-89c0362131f7-kube-api-access-9c7jm\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 
06:45:39.952547 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5f6l\" (UniqueName: \"kubernetes.io/projected/7ca59bae-8d1e-48ac-9fde-00b4482fd916-kube-api-access-s5f6l\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.952571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca59bae-8d1e-48ac-9fde-00b4482fd916-logs\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.953059 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca59bae-8d1e-48ac-9fde-00b4482fd916-logs\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.953678 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-svc\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.953703 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-nb\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.954472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-config\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.955928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-config-data\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.957078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-sb\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.957810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-swift-storage-0\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.961561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-scripts\") pod 
\"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.965343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ca59bae-8d1e-48ac-9fde-00b4482fd916-horizon-secret-key\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.981964 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5f6l\" (UniqueName: \"kubernetes.io/projected/7ca59bae-8d1e-48ac-9fde-00b4482fd916-kube-api-access-s5f6l\") pod \"horizon-f69b87f6c-9xlvs\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:39 crc kubenswrapper[4823]: I1206 06:45:39.990555 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c7jm\" (UniqueName: \"kubernetes.io/projected/f3581c1d-97bc-41ba-80d4-89c0362131f7-kube-api-access-9c7jm\") pod \"dnsmasq-dns-c669f5d67-lzz9h\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.028374 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.062677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.066958 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bd4959b7-8gj7b"] Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.104070 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.123258 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-h7n5l" Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.163977 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ktsq4"] Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.360547 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f78c4d669-7mcnd"] Dec 06 06:45:40 crc kubenswrapper[4823]: I1206 06:45:40.369058 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x7fxv"] Dec 06 06:45:41 crc kubenswrapper[4823]: W1206 06:45:41.101981 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bdf563e_881e_4e35_a95e_8ac49f9c498e.slice/crio-945fd5a0d1b99c2084853921bff817db48d00b7e74f74b95b18783154ed4da5b WatchSource:0}: Error finding container 945fd5a0d1b99c2084853921bff817db48d00b7e74f74b95b18783154ed4da5b: Status 404 returned error can't find the container with id 945fd5a0d1b99c2084853921bff817db48d00b7e74f74b95b18783154ed4da5b Dec 06 06:45:41 crc kubenswrapper[4823]: W1206 06:45:41.330785 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03a2a291_9168_4869_9c60_f7c281733b5b.slice/crio-8e8e059aef6a1f364a8623d4ad4b30d6557e191bbc5fa276bab7404dd76cc3b6 WatchSource:0}: Error finding container 8e8e059aef6a1f364a8623d4ad4b30d6557e191bbc5fa276bab7404dd76cc3b6: Status 404 returned error can't find the container with id 8e8e059aef6a1f364a8623d4ad4b30d6557e191bbc5fa276bab7404dd76cc3b6 Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.497618 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x7fxv" event={"ID":"3d04e917-34c8-4df1-bc89-69ca7b7753ac","Type":"ContainerStarted","Data":"efbd19c82255fe05bb5d3a279a978671acf2d85316d8bcf93f2fcfe50558d88c"} Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.556695 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f78c4d669-7mcnd" event={"ID":"786cb861-417a-49bc-a619-afb242b5d8c2","Type":"ContainerStarted","Data":"f3c360759f1b34789e9891650c178925ffebdc33758e32e793e7c11f44f899d4"} Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.574234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" event={"ID":"0bdf563e-881e-4e35-a95e-8ac49f9c498e","Type":"ContainerStarted","Data":"945fd5a0d1b99c2084853921bff817db48d00b7e74f74b95b18783154ed4da5b"} Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.576418 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktsq4" event={"ID":"03a2a291-9168-4869-9c60-f7c281733b5b","Type":"ContainerStarted","Data":"8e8e059aef6a1f364a8623d4ad4b30d6557e191bbc5fa276bab7404dd76cc3b6"} Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.597886 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kls2x"] Dec 06 06:45:41 crc kubenswrapper[4823]: W1206 06:45:41.658608 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod157d2d95_42a3_4f80_8c1d_b8c27bee49be.slice/crio-e441d168657a75267a532efa90115be581802509a700ac48db39a9349a69e2c2 WatchSource:0}: Error finding container e441d168657a75267a532efa90115be581802509a700ac48db39a9349a69e2c2: Status 404 returned error can't find the container with id 
e441d168657a75267a532efa90115be581802509a700ac48db39a9349a69e2c2 Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.786942 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f78c4d669-7mcnd"] Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.808086 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-88df94445-p9p69"] Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.815446 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.846036 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-88df94445-p9p69"] Dec 06 06:45:41 crc kubenswrapper[4823]: I1206 06:45:41.884043 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.015028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-scripts\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.015094 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzc9\" (UniqueName: \"kubernetes.io/projected/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-kube-api-access-pbzc9\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.015150 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-logs\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.015183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-config-data\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.015244 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-horizon-secret-key\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.117206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-scripts\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.117486 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbzc9\" (UniqueName: \"kubernetes.io/projected/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-kube-api-access-pbzc9\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " 
pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.117593 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-logs\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.117627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-config-data\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.117744 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-horizon-secret-key\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.128894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-logs\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.130026 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-horizon-secret-key\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.133581 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-scripts\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.134058 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-config-data\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.140737 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.152688 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-h7n5l"] Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.153039 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbzc9\" (UniqueName: \"kubernetes.io/projected/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-kube-api-access-pbzc9\") pod \"horizon-88df94445-p9p69\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.168557 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88df94445-p9p69" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.227321 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dwpkn"] Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.549279 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c669f5d67-lzz9h"] Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.577441 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f69b87f6c-9xlvs"] Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.582763 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 06 06:45:42 crc kubenswrapper[4823]: W1206 06:45:42.584450 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3581c1d_97bc_41ba_80d4_89c0362131f7.slice/crio-2e3f60526b23a86cc6c8dda197919af32fa0e35f928515ecd71834c39fd76cbf WatchSource:0}: Error finding container 2e3f60526b23a86cc6c8dda197919af32fa0e35f928515ecd71834c39fd76cbf: Status 404 returned error can't find the container with id 2e3f60526b23a86cc6c8dda197919af32fa0e35f928515ecd71834c39fd76cbf Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.593404 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.595344 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktsq4" event={"ID":"03a2a291-9168-4869-9c60-f7c281733b5b","Type":"ContainerStarted","Data":"384b8c89c4ec153f147be679258ed92270d1089bbcb20e479a2687ddc87bacf8"} Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.596904 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h7n5l" event={"ID":"2955103b-2cae-4fe0-8ffe-bbca608cad77","Type":"ContainerStarted","Data":"18d758e9ed541e7973f16c018ece0c84bf6aa7109d3d367225aeac5d08c445a8"} Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.599440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwpkn" event={"ID":"b984559e-efdf-4d21-917f-420506f550da","Type":"ContainerStarted","Data":"a535cee0854520719889f2127987bd4c13c061da8068e383c2f7a2adb6cd1b36"} Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.601451 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kls2x" event={"ID":"157d2d95-42a3-4f80-8c1d-b8c27bee49be","Type":"ContainerStarted","Data":"e441d168657a75267a532efa90115be581802509a700ac48db39a9349a69e2c2"} Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.629159 4823 generic.go:334] "Generic (PLEG): container finished" podID="0bdf563e-881e-4e35-a95e-8ac49f9c498e" containerID="349688c9a49fc070cc145e8bf4f740030951d0e83057e06930234db5ce5bdc27" exitCode=0 Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.629286 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" event={"ID":"0bdf563e-881e-4e35-a95e-8ac49f9c498e","Type":"ContainerDied","Data":"349688c9a49fc070cc145e8bf4f740030951d0e83057e06930234db5ce5bdc27"} Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.653983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7be6d4d-b41b-462c-ac84-16b84a45b63c","Type":"ContainerStarted","Data":"4a8bfd6bda53bff6df1e180820c142fc287cf303cbe4b685c129cec4724dd201"} 
Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.693223 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ktsq4" podStartSLOduration=4.693194124 podStartE2EDuration="4.693194124s" podCreationTimestamp="2025-12-06 06:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:42.673509065 +0000 UTC m=+1243.959261025" watchObservedRunningTime="2025-12-06 06:45:42.693194124 +0000 UTC m=+1243.978946084" Dec 06 06:45:42 crc kubenswrapper[4823]: I1206 06:45:42.780172 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-88df94445-p9p69"] Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.669309 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88df94445-p9p69" event={"ID":"597c7a12-cbe9-4e65-b536-1bb49f1f36a2","Type":"ContainerStarted","Data":"61be5ed1a2867f6d5acff740156bdc1949ae6253b425a97363413e32015f148f"} Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.671829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f69b87f6c-9xlvs" event={"ID":"7ca59bae-8d1e-48ac-9fde-00b4482fd916","Type":"ContainerStarted","Data":"29db22d5d84618ca8842f4e24ca82bf550459885c9b3b77bdfaefb2bd6b5543e"} Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.673957 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwpkn" event={"ID":"b984559e-efdf-4d21-917f-420506f550da","Type":"ContainerStarted","Data":"45c2548ae54254ed1b411a8df02203fa9d6a360e80300e3ebd0ebb4d1550db82"} Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.680694 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerID="3c5ff6eb0153aab2547a4b8810250dbdf697cbec57b0746a908a18fbdc06b5cb" exitCode=0 Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.682571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" event={"ID":"f3581c1d-97bc-41ba-80d4-89c0362131f7","Type":"ContainerDied","Data":"3c5ff6eb0153aab2547a4b8810250dbdf697cbec57b0746a908a18fbdc06b5cb"} Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.682613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" event={"ID":"f3581c1d-97bc-41ba-80d4-89c0362131f7","Type":"ContainerStarted","Data":"2e3f60526b23a86cc6c8dda197919af32fa0e35f928515ecd71834c39fd76cbf"} Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.694877 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 06 06:45:43 crc kubenswrapper[4823]: I1206 06:45:43.709519 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dwpkn" podStartSLOduration=5.70949572 podStartE2EDuration="5.70949572s" podCreationTimestamp="2025-12-06 06:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:43.697411831 +0000 UTC m=+1244.983163801" watchObservedRunningTime="2025-12-06 06:45:43.70949572 +0000 UTC m=+1244.995247680" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.133049 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.200081 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-config\") pod \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.200247 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-nb\") pod \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.200287 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-svc\") pod \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.200402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-swift-storage-0\") pod \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.200435 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-sb\") pod \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.200583 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2klxp\" (UniqueName: \"kubernetes.io/projected/0bdf563e-881e-4e35-a95e-8ac49f9c498e-kube-api-access-2klxp\") pod \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\" (UID: \"0bdf563e-881e-4e35-a95e-8ac49f9c498e\") " Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.206442 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdf563e-881e-4e35-a95e-8ac49f9c498e-kube-api-access-2klxp" (OuterVolumeSpecName: "kube-api-access-2klxp") pod "0bdf563e-881e-4e35-a95e-8ac49f9c498e" (UID: "0bdf563e-881e-4e35-a95e-8ac49f9c498e"). InnerVolumeSpecName "kube-api-access-2klxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.230762 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0bdf563e-881e-4e35-a95e-8ac49f9c498e" (UID: "0bdf563e-881e-4e35-a95e-8ac49f9c498e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.233350 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0bdf563e-881e-4e35-a95e-8ac49f9c498e" (UID: "0bdf563e-881e-4e35-a95e-8ac49f9c498e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.256846 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-config" (OuterVolumeSpecName: "config") pod "0bdf563e-881e-4e35-a95e-8ac49f9c498e" (UID: "0bdf563e-881e-4e35-a95e-8ac49f9c498e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.273478 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0bdf563e-881e-4e35-a95e-8ac49f9c498e" (UID: "0bdf563e-881e-4e35-a95e-8ac49f9c498e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.278229 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0bdf563e-881e-4e35-a95e-8ac49f9c498e" (UID: "0bdf563e-881e-4e35-a95e-8ac49f9c498e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.302972 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2klxp\" (UniqueName: \"kubernetes.io/projected/0bdf563e-881e-4e35-a95e-8ac49f9c498e-kube-api-access-2klxp\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.303030 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.303045 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.303055 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.303065 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.303073 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdf563e-881e-4e35-a95e-8ac49f9c498e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.696529 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" event={"ID":"0bdf563e-881e-4e35-a95e-8ac49f9c498e","Type":"ContainerDied","Data":"945fd5a0d1b99c2084853921bff817db48d00b7e74f74b95b18783154ed4da5b"} Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.696941 4823 scope.go:117] "RemoveContainer" containerID="349688c9a49fc070cc145e8bf4f740030951d0e83057e06930234db5ce5bdc27" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.697171 4823 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bd4959b7-8gj7b" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.713770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" event={"ID":"f3581c1d-97bc-41ba-80d4-89c0362131f7","Type":"ContainerStarted","Data":"b7be0bf016888fe1effff48cefc952a6474ebc37a07f703469d2b5bb5f921a2e"} Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.714717 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.742324 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" podStartSLOduration=5.742270623 podStartE2EDuration="5.742270623s" podCreationTimestamp="2025-12-06 06:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:45:44.736932358 +0000 UTC m=+1246.022684328" watchObservedRunningTime="2025-12-06 06:45:44.742270623 +0000 UTC m=+1246.028022583" Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.806732 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bd4959b7-8gj7b"] Dec 06 06:45:44 crc kubenswrapper[4823]: I1206 06:45:44.845227 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bd4959b7-8gj7b"] Dec 06 06:45:45 crc kubenswrapper[4823]: I1206 06:45:45.157651 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdf563e-881e-4e35-a95e-8ac49f9c498e" path="/var/lib/kubelet/pods/0bdf563e-881e-4e35-a95e-8ac49f9c498e/volumes" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.497315 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f69b87f6c-9xlvs"] Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.534941 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cdc5bf4b4-qft5r"] Dec 06 06:45:48 crc kubenswrapper[4823]: E1206 06:45:48.535598 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdf563e-881e-4e35-a95e-8ac49f9c498e" containerName="init" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.535621 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdf563e-881e-4e35-a95e-8ac49f9c498e" containerName="init" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.535876 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdf563e-881e-4e35-a95e-8ac49f9c498e" containerName="init" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.545531 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.549449 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.567442 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdc5bf4b4-qft5r"] Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.645738 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88df94445-p9p69"] Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650319 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2fv\" (UniqueName: \"kubernetes.io/projected/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-kube-api-access-dq2fv\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650398 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-scripts\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650425 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-combined-ca-bundle\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650651 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-config-data\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-secret-key\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650865 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-tls-certs\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.650969 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-logs\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.665929 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dcc5c8c58-p6xlr"] Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.672458 4823 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.691442 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dcc5c8c58-p6xlr"] Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.753918 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-secret-key\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.753974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-tls-certs\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754008 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-horizon-secret-key\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-logs\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754086 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-horizon-tls-certs\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754483 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-combined-ca-bundle\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754690 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f410137-3943-4e5f-890f-d7f54e165884-scripts\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f410137-3943-4e5f-890f-d7f54e165884-config-data\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754753 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-logs\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754798 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2fv\" (UniqueName: \"kubernetes.io/projected/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-kube-api-access-dq2fv\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754858 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f410137-3943-4e5f-890f-d7f54e165884-logs\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-scripts\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-combined-ca-bundle\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.754983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmdd\" (UniqueName: \"kubernetes.io/projected/4f410137-3943-4e5f-890f-d7f54e165884-kube-api-access-mzmdd\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.755031 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-config-data\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.755543 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-scripts\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.761170 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-config-data\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.762256 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-tls-certs\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: 
\"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.763100 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-secret-key\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.770566 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-combined-ca-bundle\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.787319 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2fv\" (UniqueName: \"kubernetes.io/projected/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-kube-api-access-dq2fv\") pod \"horizon-cdc5bf4b4-qft5r\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f410137-3943-4e5f-890f-d7f54e165884-scripts\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858477 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f410137-3943-4e5f-890f-d7f54e165884-config-data\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858535 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f410137-3943-4e5f-890f-d7f54e165884-logs\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858590 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmdd\" (UniqueName: \"kubernetes.io/projected/4f410137-3943-4e5f-890f-d7f54e165884-kube-api-access-mzmdd\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-horizon-secret-key\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858738 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-horizon-tls-certs\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.858798 
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.859381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f410137-3943-4e5f-890f-d7f54e165884-scripts\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.860252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f410137-3943-4e5f-890f-d7f54e165884-logs\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.860530 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f410137-3943-4e5f-890f-d7f54e165884-config-data\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.864887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-horizon-tls-certs\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.869942 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-combined-ca-bundle\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.881335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f410137-3943-4e5f-890f-d7f54e165884-horizon-secret-key\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.881973 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmdd\" (UniqueName: \"kubernetes.io/projected/4f410137-3943-4e5f-890f-d7f54e165884-kube-api-access-mzmdd\") pod \"horizon-5dcc5c8c58-p6xlr\" (UID: \"4f410137-3943-4e5f-890f-d7f54e165884\") " pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:45:48 crc kubenswrapper[4823]: I1206 06:45:48.885489 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdc5bf4b4-qft5r"
Dec 06 06:45:49 crc kubenswrapper[4823]: I1206 06:45:49.002775 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcc5c8c58-p6xlr"
Need to start a new one" pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.228941 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.338934 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"] Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.339180 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" containerID="cri-o://e672a3e6715195b8c61670f17bff31231abe9a20bda53737bee843abfca289ca" gracePeriod=10 Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.817464 4823 generic.go:334] "Generic (PLEG): container finished" podID="03a2a291-9168-4869-9c60-f7c281733b5b" containerID="384b8c89c4ec153f147be679258ed92270d1089bbcb20e479a2687ddc87bacf8" exitCode=0 Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.817619 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktsq4" event={"ID":"03a2a291-9168-4869-9c60-f7c281733b5b","Type":"ContainerDied","Data":"384b8c89c4ec153f147be679258ed92270d1089bbcb20e479a2687ddc87bacf8"} Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.824125 4823 generic.go:334] "Generic (PLEG): container finished" podID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerID="e672a3e6715195b8c61670f17bff31231abe9a20bda53737bee843abfca289ca" exitCode=0 Dec 06 06:45:50 crc kubenswrapper[4823]: I1206 06:45:50.824186 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" event={"ID":"9fc050cb-5e23-4a27-85f6-d95f40e2e237","Type":"ContainerDied","Data":"e672a3e6715195b8c61670f17bff31231abe9a20bda53737bee843abfca289ca"} Dec 06 06:45:52 crc kubenswrapper[4823]: I1206 06:45:52.861129 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Dec 06 06:46:02 crc kubenswrapper[4823]: I1206 06:46:02.861054 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.470979 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.471053 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.471204 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n548h7bh8fh575h59dh65ch55ch5bdh56ch5dchd9h588h54ch66dhf4hffh595h5ffh58bh586h686h5b9h545h654h69h5b5h5c6hchdh79h584h5bbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbzc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-88df94445-p9p69_openstack(597c7a12-cbe9-4e65-b536-1bb49f1f36a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.474859 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-88df94445-p9p69" podUID="597c7a12-cbe9-4e65-b536-1bb49f1f36a2" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.554652 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.554801 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.554959 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
Dec 06 06:46:04 crc kubenswrapper[4823]: E1206 06:46:04.557344 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-f69b87f6c-9xlvs" podUID="7ca59bae-8d1e-48ac-9fde-00b4482fd916"
Dec 06 06:46:05 crc kubenswrapper[4823]: E1206 06:46:05.504380 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-barbican-api:watcher_latest"
Dec 06 06:46:05 crc kubenswrapper[4823]: E1206 06:46:05.504737 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-barbican-api:watcher_latest"
Dec 06 06:46:05 crc kubenswrapper[4823]: E1206 06:46:05.504871 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.174:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plqv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-x7fxv_openstack(3d04e917-34c8-4df1-bc89-69ca7b7753ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 06:46:05 crc kubenswrapper[4823]: E1206 06:46:05.506363 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-x7fxv" podUID="3d04e917-34c8-4df1-bc89-69ca7b7753ac"
Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.641499 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f69b87f6c-9xlvs"
Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.651359 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ktsq4"
Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.664774 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-88df94445-p9p69"
Need to start a new one" pod="openstack/horizon-88df94445-p9p69" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703452 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-combined-ca-bundle\") pod \"03a2a291-9168-4869-9c60-f7c281733b5b\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703511 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-logs\") pod \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703540 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqnhl\" (UniqueName: \"kubernetes.io/projected/03a2a291-9168-4869-9c60-f7c281733b5b-kube-api-access-mqnhl\") pod \"03a2a291-9168-4869-9c60-f7c281733b5b\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703579 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5f6l\" (UniqueName: \"kubernetes.io/projected/7ca59bae-8d1e-48ac-9fde-00b4482fd916-kube-api-access-s5f6l\") pod \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703628 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-scripts\") pod \"03a2a291-9168-4869-9c60-f7c281733b5b\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703681 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca59bae-8d1e-48ac-9fde-00b4482fd916-logs\") pod \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703792 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-config-data\") pod \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703821 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-horizon-secret-key\") pod \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703864 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-config-data\") pod \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-config-data\") pod \"03a2a291-9168-4869-9c60-f7c281733b5b\" (UID: 
\"03a2a291-9168-4869-9c60-f7c281733b5b\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.703983 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-scripts\") pod \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704007 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-scripts\") pod \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-credential-keys\") pod \"03a2a291-9168-4869-9c60-f7c281733b5b\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704079 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ca59bae-8d1e-48ac-9fde-00b4482fd916-horizon-secret-key\") pod \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\" (UID: \"7ca59bae-8d1e-48ac-9fde-00b4482fd916\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704099 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-fernet-keys\") pod \"03a2a291-9168-4869-9c60-f7c281733b5b\" (UID: \"03a2a291-9168-4869-9c60-f7c281733b5b\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbzc9\" (UniqueName: \"kubernetes.io/projected/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-kube-api-access-pbzc9\") pod \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\" (UID: \"597c7a12-cbe9-4e65-b536-1bb49f1f36a2\") " Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-config-data" (OuterVolumeSpecName: "config-data") pod "7ca59bae-8d1e-48ac-9fde-00b4482fd916" (UID: "7ca59bae-8d1e-48ac-9fde-00b4482fd916"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.704636 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-config-data" (OuterVolumeSpecName: "config-data") pod "597c7a12-cbe9-4e65-b536-1bb49f1f36a2" (UID: "597c7a12-cbe9-4e65-b536-1bb49f1f36a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.705258 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-scripts" (OuterVolumeSpecName: "scripts") pod "7ca59bae-8d1e-48ac-9fde-00b4482fd916" (UID: "7ca59bae-8d1e-48ac-9fde-00b4482fd916"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.705488 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-logs" (OuterVolumeSpecName: "logs") pod "597c7a12-cbe9-4e65-b536-1bb49f1f36a2" (UID: "597c7a12-cbe9-4e65-b536-1bb49f1f36a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.705589 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca59bae-8d1e-48ac-9fde-00b4482fd916-logs" (OuterVolumeSpecName: "logs") pod "7ca59bae-8d1e-48ac-9fde-00b4482fd916" (UID: "7ca59bae-8d1e-48ac-9fde-00b4482fd916"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.705901 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-scripts" (OuterVolumeSpecName: "scripts") pod "597c7a12-cbe9-4e65-b536-1bb49f1f36a2" (UID: "597c7a12-cbe9-4e65-b536-1bb49f1f36a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.713613 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "597c7a12-cbe9-4e65-b536-1bb49f1f36a2" (UID: "597c7a12-cbe9-4e65-b536-1bb49f1f36a2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.713737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-scripts" (OuterVolumeSpecName: "scripts") pod "03a2a291-9168-4869-9c60-f7c281733b5b" (UID: "03a2a291-9168-4869-9c60-f7c281733b5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.718304 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "03a2a291-9168-4869-9c60-f7c281733b5b" (UID: "03a2a291-9168-4869-9c60-f7c281733b5b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.719851 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "03a2a291-9168-4869-9c60-f7c281733b5b" (UID: "03a2a291-9168-4869-9c60-f7c281733b5b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.721244 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a2a291-9168-4869-9c60-f7c281733b5b-kube-api-access-mqnhl" (OuterVolumeSpecName: "kube-api-access-mqnhl") pod "03a2a291-9168-4869-9c60-f7c281733b5b" (UID: "03a2a291-9168-4869-9c60-f7c281733b5b"). InnerVolumeSpecName "kube-api-access-mqnhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.726361 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca59bae-8d1e-48ac-9fde-00b4482fd916-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7ca59bae-8d1e-48ac-9fde-00b4482fd916" (UID: "7ca59bae-8d1e-48ac-9fde-00b4482fd916"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.726912 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca59bae-8d1e-48ac-9fde-00b4482fd916-kube-api-access-s5f6l" (OuterVolumeSpecName: "kube-api-access-s5f6l") pod "7ca59bae-8d1e-48ac-9fde-00b4482fd916" (UID: "7ca59bae-8d1e-48ac-9fde-00b4482fd916"). InnerVolumeSpecName "kube-api-access-s5f6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.740294 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-kube-api-access-pbzc9" (OuterVolumeSpecName: "kube-api-access-pbzc9") pod "597c7a12-cbe9-4e65-b536-1bb49f1f36a2" (UID: "597c7a12-cbe9-4e65-b536-1bb49f1f36a2"). InnerVolumeSpecName "kube-api-access-pbzc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.752945 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-config-data" (OuterVolumeSpecName: "config-data") pod "03a2a291-9168-4869-9c60-f7c281733b5b" (UID: "03a2a291-9168-4869-9c60-f7c281733b5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.753081 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03a2a291-9168-4869-9c60-f7c281733b5b" (UID: "03a2a291-9168-4869-9c60-f7c281733b5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806686 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806730 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806745 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806757 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806769 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806783 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ca59bae-8d1e-48ac-9fde-00b4482fd916-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806793 4823 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806916 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806933 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ca59bae-8d1e-48ac-9fde-00b4482fd916-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806945 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbzc9\" (UniqueName: \"kubernetes.io/projected/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-kube-api-access-pbzc9\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806959 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806971 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597c7a12-cbe9-4e65-b536-1bb49f1f36a2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.806981 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqnhl\" (UniqueName: \"kubernetes.io/projected/03a2a291-9168-4869-9c60-f7c281733b5b-kube-api-access-mqnhl\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 
06:46:05.806992 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5f6l\" (UniqueName: \"kubernetes.io/projected/7ca59bae-8d1e-48ac-9fde-00b4482fd916-kube-api-access-s5f6l\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.807002 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03a2a291-9168-4869-9c60-f7c281733b5b-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.807014 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca59bae-8d1e-48ac-9fde-00b4482fd916-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.978568 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88df94445-p9p69" event={"ID":"597c7a12-cbe9-4e65-b536-1bb49f1f36a2","Type":"ContainerDied","Data":"61be5ed1a2867f6d5acff740156bdc1949ae6253b425a97363413e32015f148f"} Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.978982 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-88df94445-p9p69" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.995603 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f69b87f6c-9xlvs" Dec 06 06:46:05 crc kubenswrapper[4823]: I1206 06:46:05.995739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f69b87f6c-9xlvs" event={"ID":"7ca59bae-8d1e-48ac-9fde-00b4482fd916","Type":"ContainerDied","Data":"29db22d5d84618ca8842f4e24ca82bf550459885c9b3b77bdfaefb2bd6b5543e"} Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:05.998110 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ktsq4" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.002922 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktsq4" event={"ID":"03a2a291-9168-4869-9c60-f7c281733b5b","Type":"ContainerDied","Data":"8e8e059aef6a1f364a8623d4ad4b30d6557e191bbc5fa276bab7404dd76cc3b6"} Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.002977 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8e059aef6a1f364a8623d4ad4b30d6557e191bbc5fa276bab7404dd76cc3b6" Dec 06 06:46:06 crc kubenswrapper[4823]: E1206 06:46:06.005976 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-x7fxv" podUID="3d04e917-34c8-4df1-bc89-69ca7b7753ac" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.065970 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88df94445-p9p69"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.077371 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-88df94445-p9p69"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.101187 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f69b87f6c-9xlvs"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.110857 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f69b87f6c-9xlvs"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.765337 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ktsq4"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.774993 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ktsq4"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.865730 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vbx6z"] Dec 06 06:46:06 crc kubenswrapper[4823]: E1206 06:46:06.866353 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a2a291-9168-4869-9c60-f7c281733b5b" containerName="keystone-bootstrap" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.866378 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a2a291-9168-4869-9c60-f7c281733b5b" containerName="keystone-bootstrap" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.866636 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a2a291-9168-4869-9c60-f7c281733b5b" containerName="keystone-bootstrap" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.867514 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.869809 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.870227 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.870777 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.870877 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5tv2h" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.870784 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.877905 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vbx6z"] Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.928423 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqrl\" (UniqueName: \"kubernetes.io/projected/d1bb321f-d664-4c3d-a312-6a57323199c9-kube-api-access-pbqrl\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.928513 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-combined-ca-bundle\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.929491 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-config-data\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.929558 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-fernet-keys\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.929754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-credential-keys\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:06 crc kubenswrapper[4823]: I1206 06:46:06.929965 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-scripts\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.032218 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-config-data\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.032264 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-fernet-keys\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.032301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-credential-keys\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.032366 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-scripts\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.032410 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqrl\" (UniqueName: \"kubernetes.io/projected/d1bb321f-d664-4c3d-a312-6a57323199c9-kube-api-access-pbqrl\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.032441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-combined-ca-bundle\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.037605 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-combined-ca-bundle\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.037875 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-scripts\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.043107 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-credential-keys\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.049569 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-fernet-keys\") pod \"keystone-bootstrap-vbx6z\" (UID: 
\"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.049728 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-config-data\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.054131 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqrl\" (UniqueName: \"kubernetes.io/projected/d1bb321f-d664-4c3d-a312-6a57323199c9-kube-api-access-pbqrl\") pod \"keystone-bootstrap-vbx6z\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") " pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.157481 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a2a291-9168-4869-9c60-f7c281733b5b" path="/var/lib/kubelet/pods/03a2a291-9168-4869-9c60-f7c281733b5b/volumes" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.158558 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597c7a12-cbe9-4e65-b536-1bb49f1f36a2" path="/var/lib/kubelet/pods/597c7a12-cbe9-4e65-b536-1bb49f1f36a2/volumes" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.159469 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca59bae-8d1e-48ac-9fde-00b4482fd916" path="/var/lib/kubelet/pods/7ca59bae-8d1e-48ac-9fde-00b4482fd916/volumes" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.208809 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbx6z" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.862034 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 06 06:46:07 crc kubenswrapper[4823]: I1206 06:46:07.862620 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" Dec 06 06:46:12 crc kubenswrapper[4823]: I1206 06:46:12.864397 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 06 06:46:17 crc kubenswrapper[4823]: I1206 06:46:17.150854 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:46:17 crc kubenswrapper[4823]: E1206 06:46:17.151601 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Dec 06 06:46:17 crc kubenswrapper[4823]: E1206 06:46:17.151633 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Dec 06 06:46:17 crc kubenswrapper[4823]: E1206 06:46:17.151831 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:glance-db-sync,Image:38.102.83.174:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6qtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9fd7r_openstack(f5301842-d5df-4df6-8699-56f86789df64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:46:17 crc kubenswrapper[4823]: E1206 06:46:17.155112 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9fd7r" podUID="f5301842-d5df-4df6-8699-56f86789df64" Dec 06 06:46:17 crc kubenswrapper[4823]: I1206 06:46:17.865363 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 06 06:46:18 crc kubenswrapper[4823]: E1206 06:46:18.125690 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-9fd7r" podUID="f5301842-d5df-4df6-8699-56f86789df64" Dec 06 06:46:22 crc kubenswrapper[4823]: I1206 06:46:22.866226 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: 
i/o timeout" Dec 06 06:46:27 crc kubenswrapper[4823]: I1206 06:46:27.867704 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.577219 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.732455 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-config\") pod \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.732543 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-svc\") pod \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.732738 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-nb\") pod \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.732803 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-sb\") pod \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.732872 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-swift-storage-0\") pod \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.732988 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkdsf\" (UniqueName: \"kubernetes.io/projected/9fc050cb-5e23-4a27-85f6-d95f40e2e237-kube-api-access-zkdsf\") pod \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\" (UID: \"9fc050cb-5e23-4a27-85f6-d95f40e2e237\") " Dec 06 06:46:29 crc kubenswrapper[4823]: E1206 06:46:29.733888 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Dec 06 06:46:29 crc kubenswrapper[4823]: E1206 06:46:29.733971 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Dec 06 06:46:29 crc kubenswrapper[4823]: E1206 06:46:29.734188 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.174:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dfh644h96h7dh7dh9dh6h79hd6h64ch645h576h574h586h64ch5bbh68bh5fh5h58bh5dbhdbhd5h5ddh5b5h86h8bhd4h6h5ddh7dh56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpw9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f7be6d4d-b41b-462c-ac84-16b84a45b63c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.756635 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc050cb-5e23-4a27-85f6-d95f40e2e237-kube-api-access-zkdsf" (OuterVolumeSpecName: "kube-api-access-zkdsf") pod "9fc050cb-5e23-4a27-85f6-d95f40e2e237" (UID: "9fc050cb-5e23-4a27-85f6-d95f40e2e237"). InnerVolumeSpecName "kube-api-access-zkdsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.792996 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fc050cb-5e23-4a27-85f6-d95f40e2e237" (UID: "9fc050cb-5e23-4a27-85f6-d95f40e2e237"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.794592 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fc050cb-5e23-4a27-85f6-d95f40e2e237" (UID: "9fc050cb-5e23-4a27-85f6-d95f40e2e237"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.802237 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fc050cb-5e23-4a27-85f6-d95f40e2e237" (UID: "9fc050cb-5e23-4a27-85f6-d95f40e2e237"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.815744 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-config" (OuterVolumeSpecName: "config") pod "9fc050cb-5e23-4a27-85f6-d95f40e2e237" (UID: "9fc050cb-5e23-4a27-85f6-d95f40e2e237"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.826751 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fc050cb-5e23-4a27-85f6-d95f40e2e237" (UID: "9fc050cb-5e23-4a27-85f6-d95f40e2e237"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.838195 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkdsf\" (UniqueName: \"kubernetes.io/projected/9fc050cb-5e23-4a27-85f6-d95f40e2e237-kube-api-access-zkdsf\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.838242 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.838285 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.838297 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.838306 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.838315 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc050cb-5e23-4a27-85f6-d95f40e2e237-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:29 crc kubenswrapper[4823]: I1206 06:46:29.980848 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdc5bf4b4-qft5r"] 
Dec 06 06:46:30 crc kubenswrapper[4823]: I1206 06:46:30.281417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" event={"ID":"9fc050cb-5e23-4a27-85f6-d95f40e2e237","Type":"ContainerDied","Data":"cc43e1281be8b2e515ba7ca37fb7021afc51790d28a7b05a3beb446da78b36de"} Dec 06 06:46:30 crc kubenswrapper[4823]: I1206 06:46:30.282013 4823 scope.go:117] "RemoveContainer" containerID="e672a3e6715195b8c61670f17bff31231abe9a20bda53737bee843abfca289ca" Dec 06 06:46:30 crc kubenswrapper[4823]: I1206 06:46:30.281647 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" Dec 06 06:46:30 crc kubenswrapper[4823]: I1206 06:46:30.322255 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"] Dec 06 06:46:30 crc kubenswrapper[4823]: I1206 06:46:30.332204 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d97c8ddfc-xcbs2"] Dec 06 06:46:31 crc kubenswrapper[4823]: E1206 06:46:31.114423 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 06 06:46:31 crc kubenswrapper[4823]: E1206 06:46:31.114512 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 06 06:46:31 crc kubenswrapper[4823]: E1206 06:46:31.114843 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n558h5bh667h64dh546h647h9dh68fh665h5d7h684h79h68bh5d4h95hdbh5f8h64bh597h54bh56hdch64dh659h56ch664h696h7hd7h8fh7bhcdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkqzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f78c4d669-7mcnd_openstack(786cb861-417a-49bc-a619-afb242b5d8c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:46:31 crc kubenswrapper[4823]: E1206 06:46:31.117562 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6f78c4d669-7mcnd" podUID="786cb861-417a-49bc-a619-afb242b5d8c2" Dec 06 06:46:31 crc kubenswrapper[4823]: I1206 06:46:31.156082 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" path="/var/lib/kubelet/pods/9fc050cb-5e23-4a27-85f6-d95f40e2e237/volumes" Dec 06 06:46:32 crc kubenswrapper[4823]: I1206 06:46:32.869813 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d97c8ddfc-xcbs2" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 06 06:46:32 crc kubenswrapper[4823]: W1206 06:46:32.968217 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bcc21a4_6b09_4804_86d5_85cc7f0267e7.slice/crio-d94b563647fc919fb468045049d017a24c5e940d9629b6251185df470f430a14 WatchSource:0}: Error finding container d94b563647fc919fb468045049d017a24c5e940d9629b6251185df470f430a14: Status 404 returned error can't find the container with id 
d94b563647fc919fb468045049d017a24c5e940d9629b6251185df470f430a14 Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.020508 4823 scope.go:117] "RemoveContainer" containerID="e8aead0ac1341b48036354257929d3e8549a7aec25d41df1a26718084e8b5420" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.115557 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.241893 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/786cb861-417a-49bc-a619-afb242b5d8c2-logs\") pod \"786cb861-417a-49bc-a619-afb242b5d8c2\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.242450 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-config-data\") pod \"786cb861-417a-49bc-a619-afb242b5d8c2\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.242521 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkqzk\" (UniqueName: \"kubernetes.io/projected/786cb861-417a-49bc-a619-afb242b5d8c2-kube-api-access-rkqzk\") pod \"786cb861-417a-49bc-a619-afb242b5d8c2\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.242559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-scripts\") pod \"786cb861-417a-49bc-a619-afb242b5d8c2\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.242576 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786cb861-417a-49bc-a619-afb242b5d8c2-logs" (OuterVolumeSpecName: "logs") pod "786cb861-417a-49bc-a619-afb242b5d8c2" (UID: "786cb861-417a-49bc-a619-afb242b5d8c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.242695 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/786cb861-417a-49bc-a619-afb242b5d8c2-horizon-secret-key\") pod \"786cb861-417a-49bc-a619-afb242b5d8c2\" (UID: \"786cb861-417a-49bc-a619-afb242b5d8c2\") " Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.243233 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/786cb861-417a-49bc-a619-afb242b5d8c2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.243885 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-config-data" (OuterVolumeSpecName: "config-data") pod "786cb861-417a-49bc-a619-afb242b5d8c2" (UID: "786cb861-417a-49bc-a619-afb242b5d8c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.245216 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-scripts" (OuterVolumeSpecName: "scripts") pod "786cb861-417a-49bc-a619-afb242b5d8c2" (UID: "786cb861-417a-49bc-a619-afb242b5d8c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.254925 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786cb861-417a-49bc-a619-afb242b5d8c2-kube-api-access-rkqzk" (OuterVolumeSpecName: "kube-api-access-rkqzk") pod "786cb861-417a-49bc-a619-afb242b5d8c2" (UID: "786cb861-417a-49bc-a619-afb242b5d8c2"). InnerVolumeSpecName "kube-api-access-rkqzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.258536 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786cb861-417a-49bc-a619-afb242b5d8c2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "786cb861-417a-49bc-a619-afb242b5d8c2" (UID: "786cb861-417a-49bc-a619-afb242b5d8c2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.323167 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc5bf4b4-qft5r" event={"ID":"2bcc21a4-6b09-4804-86d5-85cc7f0267e7","Type":"ContainerStarted","Data":"d94b563647fc919fb468045049d017a24c5e940d9629b6251185df470f430a14"} Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.326585 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f78c4d669-7mcnd" event={"ID":"786cb861-417a-49bc-a619-afb242b5d8c2","Type":"ContainerDied","Data":"f3c360759f1b34789e9891650c178925ffebdc33758e32e793e7c11f44f899d4"} Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.326606 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f78c4d669-7mcnd" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.356379 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.356420 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkqzk\" (UniqueName: \"kubernetes.io/projected/786cb861-417a-49bc-a619-afb242b5d8c2-kube-api-access-rkqzk\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.356432 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/786cb861-417a-49bc-a619-afb242b5d8c2-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.356445 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/786cb861-417a-49bc-a619-afb242b5d8c2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:33 crc kubenswrapper[4823]: E1206 06:46:33.379155 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Dec 06 06:46:33 crc kubenswrapper[4823]: E1206 06:46:33.379258 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Dec 06 06:46:33 crc kubenswrapper[4823]: E1206 06:46:33.379389 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.174:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84hxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kls2x_openstack(157d2d95-42a3-4f80-8c1d-b8c27bee49be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:46:33 crc kubenswrapper[4823]: E1206 06:46:33.380628 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kls2x" podUID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.422055 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f78c4d669-7mcnd"] Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.435143 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f78c4d669-7mcnd"] Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.574481 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vbx6z"] Dec 06 06:46:33 crc kubenswrapper[4823]: W1206 06:46:33.576593 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1bb321f_d664_4c3d_a312_6a57323199c9.slice/crio-e21af1ab07fc69bff372f479e29190f4bec4c0e19d4ae5085ea21865041569f1 WatchSource:0}: Error 
finding container e21af1ab07fc69bff372f479e29190f4bec4c0e19d4ae5085ea21865041569f1: Status 404 returned error can't find the container with id e21af1ab07fc69bff372f479e29190f4bec4c0e19d4ae5085ea21865041569f1 Dec 06 06:46:33 crc kubenswrapper[4823]: I1206 06:46:33.626095 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dcc5c8c58-p6xlr"] Dec 06 06:46:33 crc kubenswrapper[4823]: W1206 06:46:33.626281 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f410137_3943_4e5f_890f_d7f54e165884.slice/crio-e8c1e735bc69d881268d0ab812075fc6f20d30e11df39ad65ea4354f4c498c5e WatchSource:0}: Error finding container e8c1e735bc69d881268d0ab812075fc6f20d30e11df39ad65ea4354f4c498c5e: Status 404 returned error can't find the container with id e8c1e735bc69d881268d0ab812075fc6f20d30e11df39ad65ea4354f4c498c5e Dec 06 06:46:34 crc kubenswrapper[4823]: I1206 06:46:34.351438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbx6z" event={"ID":"d1bb321f-d664-4c3d-a312-6a57323199c9","Type":"ContainerStarted","Data":"e21af1ab07fc69bff372f479e29190f4bec4c0e19d4ae5085ea21865041569f1"} Dec 06 06:46:34 crc kubenswrapper[4823]: I1206 06:46:34.353674 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h7n5l" event={"ID":"2955103b-2cae-4fe0-8ffe-bbca608cad77","Type":"ContainerStarted","Data":"5ea99d5abc9e427ab63f79f7f81597a8a0063395441f32474ab0dceb630285d5"} Dec 06 06:46:34 crc kubenswrapper[4823]: I1206 06:46:34.354952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcc5c8c58-p6xlr" event={"ID":"4f410137-3943-4e5f-890f-d7f54e165884","Type":"ContainerStarted","Data":"e8c1e735bc69d881268d0ab812075fc6f20d30e11df39ad65ea4354f4c498c5e"} Dec 06 06:46:34 crc kubenswrapper[4823]: E1206 06:46:34.356481 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-kls2x" podUID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" Dec 06 06:46:35 crc kubenswrapper[4823]: I1206 06:46:35.153216 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786cb861-417a-49bc-a619-afb242b5d8c2" path="/var/lib/kubelet/pods/786cb861-417a-49bc-a619-afb242b5d8c2/volumes" Dec 06 06:46:39 crc kubenswrapper[4823]: I1206 06:46:39.406542 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-26bwc" event={"ID":"aaeee530-df36-4fc7-96d5-b93755e8c4fe","Type":"ContainerStarted","Data":"8a2f5be1440fbacc044e3c6bce78b35114f3e3e6962e6cecb11f639db268e959"} Dec 06 06:46:39 crc kubenswrapper[4823]: I1206 06:46:39.409610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbx6z" event={"ID":"d1bb321f-d664-4c3d-a312-6a57323199c9","Type":"ContainerStarted","Data":"7b4d311230b0ae04ed859ba4ec9ebfe03e3e8abd52f98c030f298e10c433e534"} Dec 06 06:46:39 crc kubenswrapper[4823]: I1206 06:46:39.437142 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-h7n5l" podStartSLOduration=9.679003037 podStartE2EDuration="1m0.437102101s" podCreationTimestamp="2025-12-06 06:45:39 +0000 UTC" firstStartedPulling="2025-12-06 06:45:42.210338362 +0000 UTC m=+1243.496090332" lastFinishedPulling="2025-12-06 06:46:32.968437446 +0000 UTC 
m=+1294.254189396" observedRunningTime="2025-12-06 06:46:39.424202018 +0000 UTC m=+1300.709953978" watchObservedRunningTime="2025-12-06 06:46:39.437102101 +0000 UTC m=+1300.722854061" Dec 06 06:46:40 crc kubenswrapper[4823]: I1206 06:46:40.457223 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vbx6z" podStartSLOduration=34.457203506 podStartE2EDuration="34.457203506s" podCreationTimestamp="2025-12-06 06:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:46:40.441118481 +0000 UTC m=+1301.726870441" watchObservedRunningTime="2025-12-06 06:46:40.457203506 +0000 UTC m=+1301.742955466" Dec 06 06:46:40 crc kubenswrapper[4823]: I1206 06:46:40.461991 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-26bwc" podStartSLOduration=36.846839337 podStartE2EDuration="1m25.461968994s" podCreationTimestamp="2025-12-06 06:45:15 +0000 UTC" firstStartedPulling="2025-12-06 06:45:16.87401455 +0000 UTC m=+1218.159766510" lastFinishedPulling="2025-12-06 06:46:05.489144207 +0000 UTC m=+1266.774896167" observedRunningTime="2025-12-06 06:46:40.455362203 +0000 UTC m=+1301.741114163" watchObservedRunningTime="2025-12-06 06:46:40.461968994 +0000 UTC m=+1301.747720954" Dec 06 06:47:05 crc kubenswrapper[4823]: E1206 06:47:05.451473 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest" Dec 06 06:47:05 crc kubenswrapper[4823]: E1206 06:47:05.452240 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest" Dec 06 06:47:05 crc kubenswrapper[4823]: E1206 06:47:05.452607 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:38.102.83.174:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dfh644h96h7dh7dh9dh6h79hd6h64ch645h576h574h586h64ch5bbh68bh5fh5h58bh5dbhdbhd5h5ddh5b5h86h8bhd4h6h5ddh7dh56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpw9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f7be6d4d-b41b-462c-ac84-16b84a45b63c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:47:06 crc kubenswrapper[4823]: I1206 06:47:06.722746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9fd7r" event={"ID":"f5301842-d5df-4df6-8699-56f86789df64","Type":"ContainerStarted","Data":"402d507b0a3393646cdc9b117e0bfb305e3b278e50d00ef1db371f76043ae9e7"} Dec 06 06:47:06 crc kubenswrapper[4823]: I1206 06:47:06.764267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc5bf4b4-qft5r" event={"ID":"2bcc21a4-6b09-4804-86d5-85cc7f0267e7","Type":"ContainerStarted","Data":"ef7e20b45fa10c9e0534bf0e943c77a7024e8c1acb561998214014561bcd023a"} Dec 06 06:47:06 crc kubenswrapper[4823]: I1206 06:47:06.770854 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9fd7r" podStartSLOduration=3.012377071 podStartE2EDuration="1m31.770827965s" podCreationTimestamp="2025-12-06 06:45:35 +0000 UTC" firstStartedPulling="2025-12-06 06:45:36.702264205 +0000 UTC m=+1237.988016165" lastFinishedPulling="2025-12-06 06:47:05.460715099 +0000 UTC m=+1326.746467059" observedRunningTime="2025-12-06 06:47:06.757949103 +0000 UTC m=+1328.043701063" 
watchObservedRunningTime="2025-12-06 06:47:06.770827965 +0000 UTC m=+1328.056579925" Dec 06 06:47:06 crc kubenswrapper[4823]: I1206 06:47:06.800399 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcc5c8c58-p6xlr" event={"ID":"4f410137-3943-4e5f-890f-d7f54e165884","Type":"ContainerStarted","Data":"cb6b71f4ea61b1173bfc23246725124edfa9a49e4c5cf5cd806463d5263733b1"} Dec 06 06:47:06 crc kubenswrapper[4823]: I1206 06:47:06.826089 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x7fxv" event={"ID":"3d04e917-34c8-4df1-bc89-69ca7b7753ac","Type":"ContainerStarted","Data":"6fe2554afef990e2feeeb72cb985a1e77b66748179faa0dd0685ef6860aa0271"} Dec 06 06:47:06 crc kubenswrapper[4823]: I1206 06:47:06.864961 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x7fxv" podStartSLOduration=4.737395932 podStartE2EDuration="1m28.864931214s" podCreationTimestamp="2025-12-06 06:45:38 +0000 UTC" firstStartedPulling="2025-12-06 06:45:41.332297531 +0000 UTC m=+1242.618049491" lastFinishedPulling="2025-12-06 06:47:05.459832813 +0000 UTC m=+1326.745584773" observedRunningTime="2025-12-06 06:47:06.855971325 +0000 UTC m=+1328.141723275" watchObservedRunningTime="2025-12-06 06:47:06.864931214 +0000 UTC m=+1328.150683174" Dec 06 06:47:07 crc kubenswrapper[4823]: I1206 06:47:07.845288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcc5c8c58-p6xlr" event={"ID":"4f410137-3943-4e5f-890f-d7f54e165884","Type":"ContainerStarted","Data":"7063357e35d9e1cfbda9096905d536f3d4abc51b8bfd926df2c0dc94056c6689"} Dec 06 06:47:07 crc kubenswrapper[4823]: I1206 06:47:07.854370 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kls2x" event={"ID":"157d2d95-42a3-4f80-8c1d-b8c27bee49be","Type":"ContainerStarted","Data":"ccbc6492c4baaefac97b9f89624d19954c90cc49ec1420e482bc5e684f82b122"} Dec 06 06:47:07 crc kubenswrapper[4823]: I1206 06:47:07.858824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc5bf4b4-qft5r" event={"ID":"2bcc21a4-6b09-4804-86d5-85cc7f0267e7","Type":"ContainerStarted","Data":"a81160012932675fd601ecff4024d99b9f89d28f93683cfb6c8170e7604051be"} Dec 06 06:47:07 crc kubenswrapper[4823]: I1206 06:47:07.885973 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5dcc5c8c58-p6xlr" podStartSLOduration=48.050975237 podStartE2EDuration="1m19.885937657s" podCreationTimestamp="2025-12-06 06:45:48 +0000 UTC" firstStartedPulling="2025-12-06 06:46:33.628644683 +0000 UTC m=+1294.914396643" lastFinishedPulling="2025-12-06 06:47:05.463607103 +0000 UTC m=+1326.749359063" observedRunningTime="2025-12-06 06:47:07.871899491 +0000 UTC m=+1329.157651451" watchObservedRunningTime="2025-12-06 06:47:07.885937657 +0000 UTC m=+1329.171689617" Dec 06 06:47:07 crc kubenswrapper[4823]: I1206 06:47:07.916621 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cdc5bf4b4-qft5r" podStartSLOduration=47.430985413 podStartE2EDuration="1m19.916592003s" podCreationTimestamp="2025-12-06 06:45:48 +0000 UTC" firstStartedPulling="2025-12-06 06:46:32.973688968 +0000 UTC m=+1294.259440928" lastFinishedPulling="2025-12-06 06:47:05.459295558 +0000 UTC m=+1326.745047518" observedRunningTime="2025-12-06 06:47:07.910106525 +0000 UTC m=+1329.195858485" watchObservedRunningTime="2025-12-06 06:47:07.916592003 +0000 UTC m=+1329.202343963" Dec 06 06:47:07 crc kubenswrapper[4823]: I1206 
06:47:07.946860 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kls2x" podStartSLOduration=5.801900921 podStartE2EDuration="1m29.946761054s" podCreationTimestamp="2025-12-06 06:45:38 +0000 UTC" firstStartedPulling="2025-12-06 06:45:41.700004916 +0000 UTC m=+1242.985756866" lastFinishedPulling="2025-12-06 06:47:05.844865039 +0000 UTC m=+1327.130616999" observedRunningTime="2025-12-06 06:47:07.939956698 +0000 UTC m=+1329.225708658" watchObservedRunningTime="2025-12-06 06:47:07.946761054 +0000 UTC m=+1329.232513024"
Dec 06 06:47:08 crc kubenswrapper[4823]: I1206 06:47:08.875018 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1bb321f-d664-4c3d-a312-6a57323199c9" containerID="7b4d311230b0ae04ed859ba4ec9ebfe03e3e8abd52f98c030f298e10c433e534" exitCode=0
Dec 06 06:47:08 crc kubenswrapper[4823]: I1206 06:47:08.875110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbx6z" event={"ID":"d1bb321f-d664-4c3d-a312-6a57323199c9","Type":"ContainerDied","Data":"7b4d311230b0ae04ed859ba4ec9ebfe03e3e8abd52f98c030f298e10c433e534"}
Dec 06 06:47:08 crc kubenswrapper[4823]: I1206 06:47:08.886817 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cdc5bf4b4-qft5r"
Dec 06 06:47:08 crc kubenswrapper[4823]: I1206 06:47:08.886902 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cdc5bf4b4-qft5r"
Dec 06 06:47:09 crc kubenswrapper[4823]: I1206 06:47:09.003884 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:47:09 crc kubenswrapper[4823]: I1206 06:47:09.003966 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5dcc5c8c58-p6xlr"
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.363634 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbx6z"
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.396495 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbqrl\" (UniqueName: \"kubernetes.io/projected/d1bb321f-d664-4c3d-a312-6a57323199c9-kube-api-access-pbqrl\") pod \"d1bb321f-d664-4c3d-a312-6a57323199c9\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") "
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.396787 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-credential-keys\") pod \"d1bb321f-d664-4c3d-a312-6a57323199c9\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") "
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.396820 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-combined-ca-bundle\") pod \"d1bb321f-d664-4c3d-a312-6a57323199c9\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") "
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.396934 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-config-data\") pod \"d1bb321f-d664-4c3d-a312-6a57323199c9\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") "
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.397095 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-scripts\") pod \"d1bb321f-d664-4c3d-a312-6a57323199c9\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") "
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.397163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-fernet-keys\") pod \"d1bb321f-d664-4c3d-a312-6a57323199c9\" (UID: \"d1bb321f-d664-4c3d-a312-6a57323199c9\") "
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.408003 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bb321f-d664-4c3d-a312-6a57323199c9-kube-api-access-pbqrl" (OuterVolumeSpecName: "kube-api-access-pbqrl") pod "d1bb321f-d664-4c3d-a312-6a57323199c9" (UID: "d1bb321f-d664-4c3d-a312-6a57323199c9"). InnerVolumeSpecName "kube-api-access-pbqrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.424029 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d1bb321f-d664-4c3d-a312-6a57323199c9" (UID: "d1bb321f-d664-4c3d-a312-6a57323199c9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.424146 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-scripts" (OuterVolumeSpecName: "scripts") pod "d1bb321f-d664-4c3d-a312-6a57323199c9" (UID: "d1bb321f-d664-4c3d-a312-6a57323199c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.424600 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d1bb321f-d664-4c3d-a312-6a57323199c9" (UID: "d1bb321f-d664-4c3d-a312-6a57323199c9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.433422 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-config-data" (OuterVolumeSpecName: "config-data") pod "d1bb321f-d664-4c3d-a312-6a57323199c9" (UID: "d1bb321f-d664-4c3d-a312-6a57323199c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.434159 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1bb321f-d664-4c3d-a312-6a57323199c9" (UID: "d1bb321f-d664-4c3d-a312-6a57323199c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.500479 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.500532 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.500547 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbqrl\" (UniqueName: \"kubernetes.io/projected/d1bb321f-d664-4c3d-a312-6a57323199c9-kube-api-access-pbqrl\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.500586 4823 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.500600 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.500614 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1bb321f-d664-4c3d-a312-6a57323199c9-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.962015 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7be6d4d-b41b-462c-ac84-16b84a45b63c","Type":"ContainerStarted","Data":"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"}
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.966146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbx6z" event={"ID":"d1bb321f-d664-4c3d-a312-6a57323199c9","Type":"ContainerDied","Data":"e21af1ab07fc69bff372f479e29190f4bec4c0e19d4ae5085ea21865041569f1"}
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.966180 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e21af1ab07fc69bff372f479e29190f4bec4c0e19d4ae5085ea21865041569f1"
Dec 06 06:47:14 crc kubenswrapper[4823]: I1206 06:47:14.966224 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbx6z"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.540541 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-664df9559f-rrrdk"]
Dec 06 06:47:15 crc kubenswrapper[4823]: E1206 06:47:15.541089 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bb321f-d664-4c3d-a312-6a57323199c9" containerName="keystone-bootstrap"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.541109 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bb321f-d664-4c3d-a312-6a57323199c9" containerName="keystone-bootstrap"
Dec 06 06:47:15 crc kubenswrapper[4823]: E1206 06:47:15.541138 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.541147 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns"
Dec 06 06:47:15 crc kubenswrapper[4823]: E1206 06:47:15.541177 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="init"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.541187 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="init"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.541416 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc050cb-5e23-4a27-85f6-d95f40e2e237" containerName="dnsmasq-dns"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.541450 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bb321f-d664-4c3d-a312-6a57323199c9" containerName="keystone-bootstrap"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.542347 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.546491 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.546858 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.547168 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.547950 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5tv2h"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.548076 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.548459 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.579055 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-664df9559f-rrrdk"]
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.738558 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-credential-keys\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.739273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-internal-tls-certs\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.739350 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-public-tls-certs\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.739470 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-scripts\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.739696 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-config-data\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.740033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-combined-ca-bundle\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.740397 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-fernet-keys\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.740445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9hn\" (UniqueName: \"kubernetes.io/projected/8f6baa5f-6712-4665-9753-9a98f2bc5595-kube-api-access-dr9hn\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-public-tls-certs\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842134 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-scripts\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842192 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-config-data\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842273 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-combined-ca-bundle\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842359 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-fernet-keys\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842380 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9hn\" (UniqueName: \"kubernetes.io/projected/8f6baa5f-6712-4665-9753-9a98f2bc5595-kube-api-access-dr9hn\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842436 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-credential-keys\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.842472 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-internal-tls-certs\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.848270 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-internal-tls-certs\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.848486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-credential-keys\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.849558 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-public-tls-certs\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.849699 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-scripts\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.851883 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-fernet-keys\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.854376 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-combined-ca-bundle\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.854468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6baa5f-6712-4665-9753-9a98f2bc5595-config-data\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.866381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9hn\" (UniqueName: \"kubernetes.io/projected/8f6baa5f-6712-4665-9753-9a98f2bc5595-kube-api-access-dr9hn\") pod \"keystone-664df9559f-rrrdk\" (UID: \"8f6baa5f-6712-4665-9753-9a98f2bc5595\") " pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:15 crc kubenswrapper[4823]: I1206 06:47:15.873651 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:16 crc kubenswrapper[4823]: I1206 06:47:16.445628 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-664df9559f-rrrdk"]
Dec 06 06:47:16 crc kubenswrapper[4823]: I1206 06:47:16.993414 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-664df9559f-rrrdk" event={"ID":"8f6baa5f-6712-4665-9753-9a98f2bc5595","Type":"ContainerStarted","Data":"0679706393dd65be4ad786a0917186bae4c14f5c2d2704b87b6b28724646c064"}
Dec 06 06:47:16 crc kubenswrapper[4823]: I1206 06:47:16.993736 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-664df9559f-rrrdk" event={"ID":"8f6baa5f-6712-4665-9753-9a98f2bc5595","Type":"ContainerStarted","Data":"4a34ebd6accbc0933e9c59220b1469fed2f2d9039206f480a9cb332dafbca95c"}
Dec 06 06:47:16 crc kubenswrapper[4823]: I1206 06:47:16.994992 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-664df9559f-rrrdk"
Dec 06 06:47:17 crc kubenswrapper[4823]: I1206 06:47:17.034345 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-664df9559f-rrrdk" podStartSLOduration=2.034314742 podStartE2EDuration="2.034314742s" podCreationTimestamp="2025-12-06 06:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:17.022929813 +0000 UTC m=+1338.308681783" watchObservedRunningTime="2025-12-06 06:47:17.034314742 +0000 UTC m=+1338.320066702"
Dec 06 06:47:18 crc kubenswrapper[4823]: I1206 06:47:18.889482 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused"
Dec 06 06:47:19 crc kubenswrapper[4823]: I1206 06:47:19.006772 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dcc5c8c58-p6xlr" podUID="4f410137-3943-4e5f-890f-d7f54e165884" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused"
Dec 06 06:47:25 crc kubenswrapper[4823]: E1206 06:47:25.550207 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c"
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.113185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7be6d4d-b41b-462c-ac84-16b84a45b63c","Type":"ContainerStarted","Data":"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"}
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.113301 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="sg-core" containerID="cri-o://754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba" gracePeriod=30
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.113408 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="proxy-httpd" containerID="cri-o://b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d" gracePeriod=30
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.113726 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.584860 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.699729 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-run-httpd\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.699881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-log-httpd\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700217 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpw9s\" (UniqueName: \"kubernetes.io/projected/f7be6d4d-b41b-462c-ac84-16b84a45b63c-kube-api-access-gpw9s\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700311 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-config-data\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700354 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-sg-core-conf-yaml\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700482 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-scripts\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700512 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-combined-ca-bundle\") pod \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\" (UID: \"f7be6d4d-b41b-462c-ac84-16b84a45b63c\") "
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.700756 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.701386 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.701411 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7be6d4d-b41b-462c-ac84-16b84a45b63c-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.707434 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-scripts" (OuterVolumeSpecName: "scripts") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.710957 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7be6d4d-b41b-462c-ac84-16b84a45b63c-kube-api-access-gpw9s" (OuterVolumeSpecName: "kube-api-access-gpw9s") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "kube-api-access-gpw9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.731488 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.736358 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.772840 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-config-data" (OuterVolumeSpecName: "config-data") pod "f7be6d4d-b41b-462c-ac84-16b84a45b63c" (UID: "f7be6d4d-b41b-462c-ac84-16b84a45b63c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.843437 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.843477 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.843492 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpw9s\" (UniqueName: \"kubernetes.io/projected/f7be6d4d-b41b-462c-ac84-16b84a45b63c-kube-api-access-gpw9s\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.843504 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:26 crc kubenswrapper[4823]: I1206 06:47:26.843516 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7be6d4d-b41b-462c-ac84-16b84a45b63c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130066 4823 generic.go:334] "Generic (PLEG): container finished" podID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerID="b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d" exitCode=0
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130109 4823 generic.go:334] "Generic (PLEG): container finished" podID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerID="754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba" exitCode=2
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130229 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130236 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7be6d4d-b41b-462c-ac84-16b84a45b63c","Type":"ContainerDied","Data":"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"}
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7be6d4d-b41b-462c-ac84-16b84a45b63c","Type":"ContainerDied","Data":"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"}
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7be6d4d-b41b-462c-ac84-16b84a45b63c","Type":"ContainerDied","Data":"4a8bfd6bda53bff6df1e180820c142fc287cf303cbe4b685c129cec4724dd201"}
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.130360 4823 scope.go:117] "RemoveContainer" containerID="b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.134047 4823 generic.go:334] "Generic (PLEG): container finished" podID="2955103b-2cae-4fe0-8ffe-bbca608cad77" containerID="5ea99d5abc9e427ab63f79f7f81597a8a0063395441f32474ab0dceb630285d5" exitCode=0
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.134089 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h7n5l" event={"ID":"2955103b-2cae-4fe0-8ffe-bbca608cad77","Type":"ContainerDied","Data":"5ea99d5abc9e427ab63f79f7f81597a8a0063395441f32474ab0dceb630285d5"}
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.137212 4823 generic.go:334] "Generic (PLEG): container finished" podID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" containerID="8a2f5be1440fbacc044e3c6bce78b35114f3e3e6962e6cecb11f639db268e959" exitCode=0
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.137236 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-26bwc" event={"ID":"aaeee530-df36-4fc7-96d5-b93755e8c4fe","Type":"ContainerDied","Data":"8a2f5be1440fbacc044e3c6bce78b35114f3e3e6962e6cecb11f639db268e959"}
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.155098 4823 scope.go:117] "RemoveContainer" containerID="754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.178531 4823 scope.go:117] "RemoveContainer" containerID="b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"
Dec 06 06:47:27 crc kubenswrapper[4823]: E1206 06:47:27.179258 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d\": container with ID starting with b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d not found: ID does not exist" containerID="b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.179318 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"} err="failed to get container status \"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d\": rpc error: code = NotFound desc = could not find container \"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d\": container with ID starting with b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d not found: ID does not exist"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.179344 4823 scope.go:117] "RemoveContainer" containerID="754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"
Dec 06 06:47:27 crc kubenswrapper[4823]: E1206 06:47:27.179901 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba\": container with ID starting with 754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba not found: ID does not exist" containerID="754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.179937 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"} err="failed to get container status \"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba\": rpc error: code = NotFound desc = could not find container \"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba\": container with ID starting with 754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba not found: ID does not exist"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.179957 4823 scope.go:117] "RemoveContainer" containerID="b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.180214 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d"} err="failed to get container status \"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d\": rpc error: code = NotFound desc = could not find container \"b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d\": container with ID starting with b6ee41d811a53aa8e29a2742fe9c34b77169f7e4dcb374e073297fede3d5df4d not found: ID does not exist"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.180264 4823 scope.go:117] "RemoveContainer" containerID="754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.180739 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba"} err="failed to get container status \"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba\": rpc error: code = NotFound desc = could not find container \"754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba\": container with ID starting with 754c997b68413895ced7473c1fb4eaa60d7494552d797528c629c1baf99415ba not found: ID does not exist"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.254499 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.274095 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.289631 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:47:27 crc kubenswrapper[4823]: E1206 06:47:27.290245 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="proxy-httpd"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.290274 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="proxy-httpd"
Dec 06 06:47:27 crc kubenswrapper[4823]: E1206 06:47:27.290314 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="sg-core"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.290323 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="sg-core"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.290557 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="sg-core"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.290582 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" containerName="proxy-httpd"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.294082 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.298933 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.299056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.304441 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.460764 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.460972 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvlg\" (UniqueName: \"kubernetes.io/projected/335c336e-79ff-426e-a360-0c0ea58e8941-kube-api-access-4gvlg\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.461016 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-run-httpd\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.461051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-scripts\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.461083 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-config-data\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.461309 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-log-httpd\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.461433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563472 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563579 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gvlg\" (UniqueName: \"kubernetes.io/projected/335c336e-79ff-426e-a360-0c0ea58e8941-kube-api-access-4gvlg\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-run-httpd\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563633 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-scripts\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-config-data\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563697 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-log-httpd\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.563724 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.564234 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-run-httpd\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.564718 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-log-httpd\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.568202 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.568548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-config-data\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.570212 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-scripts\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.579446 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.592331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gvlg\" (UniqueName: \"kubernetes.io/projected/335c336e-79ff-426e-a360-0c0ea58e8941-kube-api-access-4gvlg\") pod \"ceilometer-0\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " pod="openstack/ceilometer-0"
Dec 06 06:47:27 crc kubenswrapper[4823]: I1206 06:47:27.619443 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.164465 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:47:28 crc kubenswrapper[4823]: W1206 06:47:28.194372 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod335c336e_79ff_426e_a360_0c0ea58e8941.slice/crio-12ca0d95d1957d9526cdbc0ac2d538bd9891fa3f24a0145390eaa7755fb9d89d WatchSource:0}: Error finding container 12ca0d95d1957d9526cdbc0ac2d538bd9891fa3f24a0145390eaa7755fb9d89d: Status 404 returned error can't find the container with id 12ca0d95d1957d9526cdbc0ac2d538bd9891fa3f24a0145390eaa7755fb9d89d
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.673848 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h7n5l"
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.687368 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.802123 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-combined-ca-bundle\") pod \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.802213 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2955103b-2cae-4fe0-8ffe-bbca608cad77-logs\") pod \"2955103b-2cae-4fe0-8ffe-bbca608cad77\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.802279 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-config-data\") pod \"2955103b-2cae-4fe0-8ffe-bbca608cad77\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.802319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-combined-ca-bundle\") pod \"2955103b-2cae-4fe0-8ffe-bbca608cad77\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.802348 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld7fj\" (UniqueName: \"kubernetes.io/projected/aaeee530-df36-4fc7-96d5-b93755e8c4fe-kube-api-access-ld7fj\") pod \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.802929 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2955103b-2cae-4fe0-8ffe-bbca608cad77-logs" (OuterVolumeSpecName: "logs") pod "2955103b-2cae-4fe0-8ffe-bbca608cad77" (UID: "2955103b-2cae-4fe0-8ffe-bbca608cad77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.803034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-scripts\") pod \"2955103b-2cae-4fe0-8ffe-bbca608cad77\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.803183 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j5pp\" (UniqueName: \"kubernetes.io/projected/2955103b-2cae-4fe0-8ffe-bbca608cad77-kube-api-access-8j5pp\") pod \"2955103b-2cae-4fe0-8ffe-bbca608cad77\" (UID: \"2955103b-2cae-4fe0-8ffe-bbca608cad77\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.803234 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-db-sync-config-data\") pod \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.803327 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-config-data\") pod \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\" (UID: \"aaeee530-df36-4fc7-96d5-b93755e8c4fe\") "
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.804545 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2955103b-2cae-4fe0-8ffe-bbca608cad77-logs\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.810251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2955103b-2cae-4fe0-8ffe-bbca608cad77-kube-api-access-8j5pp" (OuterVolumeSpecName: "kube-api-access-8j5pp") pod "2955103b-2cae-4fe0-8ffe-bbca608cad77" (UID: "2955103b-2cae-4fe0-8ffe-bbca608cad77"). InnerVolumeSpecName "kube-api-access-8j5pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.810261 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-scripts" (OuterVolumeSpecName: "scripts") pod "2955103b-2cae-4fe0-8ffe-bbca608cad77" (UID: "2955103b-2cae-4fe0-8ffe-bbca608cad77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.810919 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aaeee530-df36-4fc7-96d5-b93755e8c4fe" (UID: "aaeee530-df36-4fc7-96d5-b93755e8c4fe"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.812216 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaeee530-df36-4fc7-96d5-b93755e8c4fe-kube-api-access-ld7fj" (OuterVolumeSpecName: "kube-api-access-ld7fj") pod "aaeee530-df36-4fc7-96d5-b93755e8c4fe" (UID: "aaeee530-df36-4fc7-96d5-b93755e8c4fe"). InnerVolumeSpecName "kube-api-access-ld7fj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.839817 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-config-data" (OuterVolumeSpecName: "config-data") pod "2955103b-2cae-4fe0-8ffe-bbca608cad77" (UID: "2955103b-2cae-4fe0-8ffe-bbca608cad77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.840076 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2955103b-2cae-4fe0-8ffe-bbca608cad77" (UID: "2955103b-2cae-4fe0-8ffe-bbca608cad77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.841240 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaeee530-df36-4fc7-96d5-b93755e8c4fe" (UID: "aaeee530-df36-4fc7-96d5-b93755e8c4fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.866006 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-config-data" (OuterVolumeSpecName: "config-data") pod "aaeee530-df36-4fc7-96d5-b93755e8c4fe" (UID: "aaeee530-df36-4fc7-96d5-b93755e8c4fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907348 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907398 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907413 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld7fj\" (UniqueName: \"kubernetes.io/projected/aaeee530-df36-4fc7-96d5-b93755e8c4fe-kube-api-access-ld7fj\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907423 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2955103b-2cae-4fe0-8ffe-bbca608cad77-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907437 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j5pp\" (UniqueName: \"kubernetes.io/projected/2955103b-2cae-4fe0-8ffe-bbca608cad77-kube-api-access-8j5pp\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907446 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907455 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:28 crc kubenswrapper[4823]: I1206 06:47:28.907463 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaeee530-df36-4fc7-96d5-b93755e8c4fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.190183 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7be6d4d-b41b-462c-ac84-16b84a45b63c" path="/var/lib/kubelet/pods/f7be6d4d-b41b-462c-ac84-16b84a45b63c/volumes"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.192630 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerStarted","Data":"12ca0d95d1957d9526cdbc0ac2d538bd9891fa3f24a0145390eaa7755fb9d89d"}
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.196243 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h7n5l"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.197754 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h7n5l" event={"ID":"2955103b-2cae-4fe0-8ffe-bbca608cad77","Type":"ContainerDied","Data":"18d758e9ed541e7973f16c018ece0c84bf6aa7109d3d367225aeac5d08c445a8"}
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.197830 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d758e9ed541e7973f16c018ece0c84bf6aa7109d3d367225aeac5d08c445a8"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.210581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-26bwc" event={"ID":"aaeee530-df36-4fc7-96d5-b93755e8c4fe","Type":"ContainerDied","Data":"f6e7d040377344c80de637ddec936c026b2b2bc0500e0884b5ff776cbbb11864"}
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.210724 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e7d040377344c80de637ddec936c026b2b2bc0500e0884b5ff776cbbb11864"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.210862 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-26bwc"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.298654 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-769d66bc44-mzlht"]
Dec 06 06:47:29 crc kubenswrapper[4823]: E1206 06:47:29.299159 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2955103b-2cae-4fe0-8ffe-bbca608cad77" containerName="placement-db-sync"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.299180 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955103b-2cae-4fe0-8ffe-bbca608cad77" containerName="placement-db-sync"
Dec 06 06:47:29 crc kubenswrapper[4823]: E1206 06:47:29.299189 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" containerName="watcher-db-sync"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.299197 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" containerName="watcher-db-sync"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.299413 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2955103b-2cae-4fe0-8ffe-bbca608cad77" containerName="placement-db-sync"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.299442 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" containerName="watcher-db-sync"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.300705 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.307342 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.307539 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.307554 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gg7q4"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.307633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.307750 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.338336 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-769d66bc44-mzlht"]
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522083 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-internal-tls-certs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522488 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-scripts\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlrk\" (UniqueName: \"kubernetes.io/projected/dc00597e-057e-4c1b-83aa-435c9e5184be-kube-api-access-2xlrk\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc00597e-057e-4c1b-83aa-435c9e5184be-logs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-combined-ca-bundle\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522649 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-config-data\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.522679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-public-tls-certs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.625884 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-internal-tls-certs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.626035 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-scripts\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.626146 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xlrk\" (UniqueName: \"kubernetes.io/projected/dc00597e-057e-4c1b-83aa-435c9e5184be-kube-api-access-2xlrk\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.626196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc00597e-057e-4c1b-83aa-435c9e5184be-logs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht"
Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.626226 4823 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-combined-ca-bundle\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.626258 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-config-data\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.626277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-public-tls-certs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.631532 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc00597e-057e-4c1b-83aa-435c9e5184be-logs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.636935 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-public-tls-certs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.637354 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-scripts\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.644213 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-combined-ca-bundle\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.644472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-config-data\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.644595 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc00597e-057e-4c1b-83aa-435c9e5184be-internal-tls-certs\") pod \"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.658831 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xlrk\" (UniqueName: \"kubernetes.io/projected/dc00597e-057e-4c1b-83aa-435c9e5184be-kube-api-access-2xlrk\") pod 
\"placement-769d66bc44-mzlht\" (UID: \"dc00597e-057e-4c1b-83aa-435c9e5184be\") " pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.853584 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.860010 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.875508 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-r6nnd" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.880429 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.899997 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.914839 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.917236 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.929750 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.937704 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946353 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-logs\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946432 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-config-data\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946488 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjzmv\" (UniqueName: \"kubernetes.io/projected/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-kube-api-access-tjzmv\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946520 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8db6f8d1-3006-4e17-a979-7777a0919c7e-logs\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " 
pod="openstack/watcher-api-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946863 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-config-data\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.946894 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfxs\" (UniqueName: \"kubernetes.io/projected/8db6f8d1-3006-4e17-a979-7777a0919c7e-kube-api-access-whfxs\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:29 crc kubenswrapper[4823]: I1206 06:47:29.950136 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.009262 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.011216 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.016110 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.020806 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.049775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.049863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8db6f8d1-3006-4e17-a979-7777a0919c7e-logs\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.049907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.049978 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.050023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.050094 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-config-data\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.050142 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfxs\" (UniqueName: \"kubernetes.io/projected/8db6f8d1-3006-4e17-a979-7777a0919c7e-kube-api-access-whfxs\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.050750 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-logs\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.050829 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-config-data\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.050983 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzmv\" (UniqueName: \"kubernetes.io/projected/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-kube-api-access-tjzmv\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.058346 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-logs\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.058397 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8db6f8d1-3006-4e17-a979-7777a0919c7e-logs\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.067070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.069150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.077201 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-config-data\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.077802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-config-data\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.078622 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.090331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfxs\" (UniqueName: \"kubernetes.io/projected/8db6f8d1-3006-4e17-a979-7777a0919c7e-kube-api-access-whfxs\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.096684 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.104648 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzmv\" (UniqueName: \"kubernetes.io/projected/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-kube-api-access-tjzmv\") pod \"watcher-decision-engine-0\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.152220 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b8f909-82f7-4db2-872c-52810a5fb3ab-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.152283 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b8f909-82f7-4db2-872c-52810a5fb3ab-config-data\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.152357 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfpdf\" (UniqueName: \"kubernetes.io/projected/e1b8f909-82f7-4db2-872c-52810a5fb3ab-kube-api-access-lfpdf\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.152427 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b8f909-82f7-4db2-872c-52810a5fb3ab-logs\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.204859 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.246460 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.256587 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfpdf\" (UniqueName: \"kubernetes.io/projected/e1b8f909-82f7-4db2-872c-52810a5fb3ab-kube-api-access-lfpdf\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.256765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b8f909-82f7-4db2-872c-52810a5fb3ab-logs\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.256855 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b8f909-82f7-4db2-872c-52810a5fb3ab-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.256901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b8f909-82f7-4db2-872c-52810a5fb3ab-config-data\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.259505 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerStarted","Data":"4ceef2af5cfaed2f862e81044aad73dbeef6768c74f17686033db1f69f407650"} Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.261979 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b8f909-82f7-4db2-872c-52810a5fb3ab-logs\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.273126 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b8f909-82f7-4db2-872c-52810a5fb3ab-config-data\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.280165 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b8f909-82f7-4db2-872c-52810a5fb3ab-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.280299 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfpdf\" (UniqueName: \"kubernetes.io/projected/e1b8f909-82f7-4db2-872c-52810a5fb3ab-kube-api-access-lfpdf\") pod \"watcher-applier-0\" (UID: \"e1b8f909-82f7-4db2-872c-52810a5fb3ab\") " pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.358127 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 06:47:30 crc kubenswrapper[4823]: I1206 06:47:30.743282 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-769d66bc44-mzlht"] Dec 06 06:47:31 crc kubenswrapper[4823]: I1206 06:47:31.140204 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:47:31 crc kubenswrapper[4823]: I1206 06:47:31.340820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerStarted","Data":"d46bf047a86f22df7184a502a31f786f0a37dee87863d1b0faeec583fc90e8e8"} Dec 06 06:47:31 crc kubenswrapper[4823]: I1206 06:47:31.365918 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerStarted","Data":"b92e557dd1017d4367e6dc8cb1e3339d81cc13eabe4589addaceb147bdfbc84a"} Dec 06 06:47:31 crc kubenswrapper[4823]: I1206 06:47:31.385931 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-769d66bc44-mzlht" event={"ID":"dc00597e-057e-4c1b-83aa-435c9e5184be","Type":"ContainerStarted","Data":"5d6257c7ac5fcf07f472849afd8f2cca90fda57dbd6108bdedae762ac73476a5"} Dec 06 06:47:31 crc kubenswrapper[4823]: I1206 06:47:31.529634 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:31 crc kubenswrapper[4823]: I1206 06:47:31.896170 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 06:47:32 crc kubenswrapper[4823]: I1206 06:47:32.401858 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerStarted","Data":"30f6cedc50388165455a9dca871dcfdce66dfdf8313c8dd60f032bea7953b0f9"} Dec 06 06:47:32 crc kubenswrapper[4823]: I1206 06:47:32.404499 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-769d66bc44-mzlht" event={"ID":"dc00597e-057e-4c1b-83aa-435c9e5184be","Type":"ContainerStarted","Data":"3b342475a38b908e13df994f6f34cb67ad67018ec9b7e5b88052575895cb5080"} Dec 06 06:47:32 crc kubenswrapper[4823]: I1206 06:47:32.406933 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"8db6f8d1-3006-4e17-a979-7777a0919c7e","Type":"ContainerStarted","Data":"c6734a4efe1449491425566b65e13adaea3d60b6ad2eb5cf7fb173129d8f14f8"} Dec 06 06:47:32 crc kubenswrapper[4823]: I1206 06:47:32.406979 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"8db6f8d1-3006-4e17-a979-7777a0919c7e","Type":"ContainerStarted","Data":"99ae071c3ff352e4d0e258bd996b4fd860bb3895997540a872eb9ee4f486505a"} Dec 06 06:47:32 crc kubenswrapper[4823]: I1206 06:47:32.409311 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e1b8f909-82f7-4db2-872c-52810a5fb3ab","Type":"ContainerStarted","Data":"324557c61ff8b613f57add7e42c90296b91f419fb398a5c7b3ff7ff42b67ba13"} Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.425520 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-769d66bc44-mzlht" event={"ID":"dc00597e-057e-4c1b-83aa-435c9e5184be","Type":"ContainerStarted","Data":"b83c7f12f7d3271add2034ee25c3e8e039f4026eec428aa14c6789c90295c4d9"} Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.426235 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.426252 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.429023 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"8db6f8d1-3006-4e17-a979-7777a0919c7e","Type":"ContainerStarted","Data":"cdca3b5f56c09f8ffa0a1b94a08ab68de64f8f38eefdd06ac4b3dbf1e6bb0077"} Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.429526 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.452701 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-769d66bc44-mzlht" podStartSLOduration=4.452676014 podStartE2EDuration="4.452676014s" podCreationTimestamp="2025-12-06 06:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:33.45251578 +0000 UTC m=+1354.738267760" watchObservedRunningTime="2025-12-06 06:47:33.452676014 +0000 UTC m=+1354.738427974" Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.489875 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.489847378 podStartE2EDuration="4.489847378s" podCreationTimestamp="2025-12-06 06:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:33.480431316 +0000 UTC m=+1354.766183276" watchObservedRunningTime="2025-12-06 06:47:33.489847378 +0000 UTC m=+1354.775599338" Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.635512 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:47:33 crc kubenswrapper[4823]: I1206 06:47:33.816320 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:47:35 crc kubenswrapper[4823]: I1206 06:47:35.247440 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 06:47:35 crc kubenswrapper[4823]: I1206 06:47:35.465148 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:47:35 crc kubenswrapper[4823]: I1206 06:47:35.466269 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e1b8f909-82f7-4db2-872c-52810a5fb3ab","Type":"ContainerStarted","Data":"ddcb9166fe08286b3b6c1371fbccd3881d17c66a5d200893b35bf09d38ec452a"} Dec 06 06:47:35 crc kubenswrapper[4823]: I1206 06:47:35.499621 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.692839958 podStartE2EDuration="6.49958804s" podCreationTimestamp="2025-12-06 06:47:29 +0000 UTC" firstStartedPulling="2025-12-06 06:47:31.908420262 +0000 UTC m=+1353.194172212" lastFinishedPulling="2025-12-06 06:47:34.715168334 +0000 UTC m=+1356.000920294" observedRunningTime="2025-12-06 06:47:35.49300903 +0000 UTC m=+1356.778760990" watchObservedRunningTime="2025-12-06 06:47:35.49958804 +0000 UTC m=+1356.785340000" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.052300 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.052397 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.194413 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.480839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerStarted","Data":"6e25610a99f6f361047fecd99d184be47e69038785c4aa0ac4fb0e5acb0c4398"} Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.489899 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerStarted","Data":"41aeecb1f5c3dec1c632880310f9cd74af0481cd9e260f9abb46d0bf63e3a807"} Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.490022 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.490175 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.508214 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.88528698 podStartE2EDuration="7.508187864s" podCreationTimestamp="2025-12-06 06:47:29 +0000 UTC" firstStartedPulling="2025-12-06 06:47:31.092080304 +0000 UTC m=+1352.377832254" lastFinishedPulling="2025-12-06 06:47:34.714981178 +0000 UTC m=+1356.000733138" observedRunningTime="2025-12-06 06:47:36.504181148 +0000 UTC m=+1357.789933108" watchObservedRunningTime="2025-12-06 06:47:36.508187864 +0000 UTC m=+1357.793939824" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.532736 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.015727473 podStartE2EDuration="9.532713812s" podCreationTimestamp="2025-12-06 06:47:27 +0000 UTC" firstStartedPulling="2025-12-06 06:47:28.209937504 +0000 UTC m=+1349.495689464" lastFinishedPulling="2025-12-06 06:47:34.726923843 +0000 UTC m=+1356.012675803" observedRunningTime="2025-12-06 06:47:36.532683642 +0000 UTC m=+1357.818435602" watchObservedRunningTime="2025-12-06 06:47:36.532713812 +0000 UTC m=+1357.818465772" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.727419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5dcc5c8c58-p6xlr" Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.973226 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdc5bf4b4-qft5r"] Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.973610 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon-log" 
containerID="cri-o://ef7e20b45fa10c9e0534bf0e943c77a7024e8c1acb561998214014561bcd023a" gracePeriod=30 Dec 06 06:47:36 crc kubenswrapper[4823]: I1206 06:47:36.973861 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" containerID="cri-o://a81160012932675fd601ecff4024d99b9f89d28f93683cfb6c8170e7604051be" gracePeriod=30 Dec 06 06:47:37 crc kubenswrapper[4823]: I1206 06:47:37.436773 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 06:47:38 crc kubenswrapper[4823]: I1206 06:47:38.887321 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Dec 06 06:47:39 crc kubenswrapper[4823]: I1206 06:47:39.540204 4823 generic.go:334] "Generic (PLEG): container finished" podID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerID="a81160012932675fd601ecff4024d99b9f89d28f93683cfb6c8170e7604051be" exitCode=0 Dec 06 06:47:39 crc kubenswrapper[4823]: I1206 06:47:39.540332 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc5bf4b4-qft5r" event={"ID":"2bcc21a4-6b09-4804-86d5-85cc7f0267e7","Type":"ContainerDied","Data":"a81160012932675fd601ecff4024d99b9f89d28f93683cfb6c8170e7604051be"} Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.205750 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.241723 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.247743 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.258767 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.359127 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.359932 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.400823 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.555356 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d04e917-34c8-4df1-bc89-69ca7b7753ac" containerID="6fe2554afef990e2feeeb72cb985a1e77b66748179faa0dd0685ef6860aa0271" exitCode=0 Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.555606 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x7fxv" event={"ID":"3d04e917-34c8-4df1-bc89-69ca7b7753ac","Type":"ContainerDied","Data":"6fe2554afef990e2feeeb72cb985a1e77b66748179faa0dd0685ef6860aa0271"} Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.558233 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:40 crc 
kubenswrapper[4823]: I1206 06:47:40.575215 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.591345 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Dec 06 06:47:40 crc kubenswrapper[4823]: I1206 06:47:40.611967 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.080010 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.189147 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqv9\" (UniqueName: \"kubernetes.io/projected/3d04e917-34c8-4df1-bc89-69ca7b7753ac-kube-api-access-plqv9\") pod \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.189255 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-db-sync-config-data\") pod \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.189554 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-combined-ca-bundle\") pod \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\" (UID: \"3d04e917-34c8-4df1-bc89-69ca7b7753ac\") " Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.205949 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3d04e917-34c8-4df1-bc89-69ca7b7753ac" (UID: "3d04e917-34c8-4df1-bc89-69ca7b7753ac"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.206761 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d04e917-34c8-4df1-bc89-69ca7b7753ac-kube-api-access-plqv9" (OuterVolumeSpecName: "kube-api-access-plqv9") pod "3d04e917-34c8-4df1-bc89-69ca7b7753ac" (UID: "3d04e917-34c8-4df1-bc89-69ca7b7753ac"). InnerVolumeSpecName "kube-api-access-plqv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.261516 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d04e917-34c8-4df1-bc89-69ca7b7753ac" (UID: "3d04e917-34c8-4df1-bc89-69ca7b7753ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.292027 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.292065 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plqv9\" (UniqueName: \"kubernetes.io/projected/3d04e917-34c8-4df1-bc89-69ca7b7753ac-kube-api-access-plqv9\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.292074 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3d04e917-34c8-4df1-bc89-69ca7b7753ac-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.577173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x7fxv" event={"ID":"3d04e917-34c8-4df1-bc89-69ca7b7753ac","Type":"ContainerDied","Data":"efbd19c82255fe05bb5d3a279a978671acf2d85316d8bcf93f2fcfe50558d88c"} Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.577240 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efbd19c82255fe05bb5d3a279a978671acf2d85316d8bcf93f2fcfe50558d88c" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.577249 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x7fxv" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.887752 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-689f45894f-mlpws"] Dec 06 06:47:42 crc kubenswrapper[4823]: E1206 06:47:42.888395 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d04e917-34c8-4df1-bc89-69ca7b7753ac" containerName="barbican-db-sync" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.888441 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d04e917-34c8-4df1-bc89-69ca7b7753ac" containerName="barbican-db-sync" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.888744 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d04e917-34c8-4df1-bc89-69ca7b7753ac" containerName="barbican-db-sync" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.890259 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-689f45894f-mlpws" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.896179 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dsznk" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.896556 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.897126 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.935449 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"] Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.939166 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.945213 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.974555 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-689f45894f-mlpws"]
Dec 06 06:47:42 crc kubenswrapper[4823]: I1206 06:47:42.986009 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"]
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009462 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-logs\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009519 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-config-data\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009573 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnjm\" (UniqueName: \"kubernetes.io/projected/ab1d7d34-2799-4553-9895-57c3c573cda2-kube-api-access-xnnjm\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009627 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrt9\" (UniqueName: \"kubernetes.io/projected/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-kube-api-access-rfrt9\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009686 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab1d7d34-2799-4553-9895-57c3c573cda2-logs\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009738 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-config-data-custom\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009789 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-combined-ca-bundle\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-combined-ca-bundle\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.009894 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data-custom\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.114850 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df895c6d9-tzwbz"]
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.121013 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.122073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data-custom\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.122319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-logs\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.122419 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-config-data\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.122573 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnjm\" (UniqueName: \"kubernetes.io/projected/ab1d7d34-2799-4553-9895-57c3c573cda2-kube-api-access-xnnjm\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.122765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfrt9\" (UniqueName: \"kubernetes.io/projected/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-kube-api-access-rfrt9\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.124188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab1d7d34-2799-4553-9895-57c3c573cda2-logs\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.125635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-config-data-custom\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.141130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-combined-ca-bundle\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.141242 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-combined-ca-bundle\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.141292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.141561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-config-data\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.125750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab1d7d34-2799-4553-9895-57c3c573cda2-logs\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.126811 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-logs\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.152284 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data-custom\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws"
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data-custom\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.138045 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-config-data-custom\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.182336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-combined-ca-bundle\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.200568 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d7d34-2799-4553-9895-57c3c573cda2-combined-ca-bundle\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.209798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnjm\" (UniqueName: \"kubernetes.io/projected/ab1d7d34-2799-4553-9895-57c3c573cda2-kube-api-access-xnnjm\") pod \"barbican-keystone-listener-7fcb8dc678-hn4ms\" (UID: \"ab1d7d34-2799-4553-9895-57c3c573cda2\") " pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.210486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-config-data\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.260090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfrt9\" (UniqueName: \"kubernetes.io/projected/76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9-kube-api-access-rfrt9\") pod \"barbican-worker-689f45894f-mlpws\" (UID: \"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9\") " pod="openstack/barbican-worker-689f45894f-mlpws" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.279239 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df895c6d9-tzwbz"] Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.284050 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-nb\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.284199 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-config\") pod 
\"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.284464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrwz\" (UniqueName: \"kubernetes.io/projected/cfa91e4a-7dd0-410e-9915-5ebc0c265902-kube-api-access-mzrwz\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.284797 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-swift-storage-0\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.284826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-svc\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.284860 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-sb\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.308789 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.321740 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d5795f4fd-qb9w4"] Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.323613 4823 util.go:30] "No sandbox for pod can be found. 
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.326769 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.355913 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d5795f4fd-qb9w4"]
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.387880 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-nb\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.387996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-config\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.388167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrwz\" (UniqueName: \"kubernetes.io/projected/cfa91e4a-7dd0-410e-9915-5ebc0c265902-kube-api-access-mzrwz\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.388380 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-swift-storage-0\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.388466 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-svc\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.388832 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-sb\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.389194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-nb\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.389452 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-config\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.390799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-swift-storage-0\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.391332 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-svc\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.397354 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-sb\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.419277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrwz\" (UniqueName: \"kubernetes.io/projected/cfa91e4a-7dd0-410e-9915-5ebc0c265902-kube-api-access-mzrwz\") pod \"dnsmasq-dns-df895c6d9-tzwbz\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.490529 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data-custom\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.490581 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6whr\" (UniqueName: \"kubernetes.io/projected/9bceb03c-e3de-4bfd-b163-69cde861ce00-kube-api-access-n6whr\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.490618 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-combined-ca-bundle\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.490675 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bceb03c-e3de-4bfd-b163-69cde861ce00-logs\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.490861 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.531370 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-689f45894f-mlpws"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.594135 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data-custom\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.594257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6whr\" (UniqueName: \"kubernetes.io/projected/9bceb03c-e3de-4bfd-b163-69cde861ce00-kube-api-access-n6whr\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.594302 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-combined-ca-bundle\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.594348 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bceb03c-e3de-4bfd-b163-69cde861ce00-logs\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.594429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.600627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bceb03c-e3de-4bfd-b163-69cde861ce00-logs\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.604279 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data-custom\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.615776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-combined-ca-bundle\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.616124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.637240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6whr\" (UniqueName: \"kubernetes.io/projected/9bceb03c-e3de-4bfd-b163-69cde861ce00-kube-api-access-n6whr\") pod \"barbican-api-5d5795f4fd-qb9w4\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.650381 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.790612 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:43 crc kubenswrapper[4823]: I1206 06:47:43.919880 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fcb8dc678-hn4ms"]
Dec 06 06:47:43 crc kubenswrapper[4823]: W1206 06:47:43.929354 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1d7d34_2799_4553_9895_57c3c573cda2.slice/crio-3e9863a808286397630a01139e78947957fd78d4056e9615cfd760fb27c4a050 WatchSource:0}: Error finding container 3e9863a808286397630a01139e78947957fd78d4056e9615cfd760fb27c4a050: Status 404 returned error can't find the container with id 3e9863a808286397630a01139e78947957fd78d4056e9615cfd760fb27c4a050
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.226050 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-689f45894f-mlpws"]
Dec 06 06:47:44 crc kubenswrapper[4823]: W1206 06:47:44.271118 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e106cf_a3c3_4af1_a57e_6fd0bcfb56f9.slice/crio-0918095cd30239a6a85bf517b83cd82dd17477440624d9a0b18a526c3947a6c4 WatchSource:0}: Error finding container 0918095cd30239a6a85bf517b83cd82dd17477440624d9a0b18a526c3947a6c4: Status 404 returned error can't find the container with id 0918095cd30239a6a85bf517b83cd82dd17477440624d9a0b18a526c3947a6c4
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.517649 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df895c6d9-tzwbz"]
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.630465 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-689f45894f-mlpws" event={"ID":"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9","Type":"ContainerStarted","Data":"0918095cd30239a6a85bf517b83cd82dd17477440624d9a0b18a526c3947a6c4"}
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.640557 4823 generic.go:334] "Generic (PLEG): container finished" podID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerID="6e25610a99f6f361047fecd99d184be47e69038785c4aa0ac4fb0e5acb0c4398" exitCode=1
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.640648 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerDied","Data":"6e25610a99f6f361047fecd99d184be47e69038785c4aa0ac4fb0e5acb0c4398"}
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.641254 4823 scope.go:117] "RemoveContainer" containerID="6e25610a99f6f361047fecd99d184be47e69038785c4aa0ac4fb0e5acb0c4398"
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.642413 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d5795f4fd-qb9w4"]
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.655219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" event={"ID":"ab1d7d34-2799-4553-9895-57c3c573cda2","Type":"ContainerStarted","Data":"3e9863a808286397630a01139e78947957fd78d4056e9615cfd760fb27c4a050"}
Dec 06 06:47:44 crc kubenswrapper[4823]: I1206 06:47:44.659192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" event={"ID":"cfa91e4a-7dd0-410e-9915-5ebc0c265902","Type":"ContainerStarted","Data":"9bc118ee9592320d053b3d2be95d665884affdab223fcf3e40ac1eb8e034f488"}
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.681585 4823 generic.go:334] "Generic (PLEG): container finished" podID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerID="6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907" exitCode=0
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.681733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" event={"ID":"cfa91e4a-7dd0-410e-9915-5ebc0c265902","Type":"ContainerDied","Data":"6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907"}
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.686776 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5795f4fd-qb9w4" event={"ID":"9bceb03c-e3de-4bfd-b163-69cde861ce00","Type":"ContainerStarted","Data":"9184cb05953d1a426eb45a5642e06bd4b7651e1b2717dc0488c41a48546bc559"}
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.686830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5795f4fd-qb9w4" event={"ID":"9bceb03c-e3de-4bfd-b163-69cde861ce00","Type":"ContainerStarted","Data":"ca4533920163b383473cd1971a80d407da0e40d88b9bc268c6b16e0d655056e1"}
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.686843 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5795f4fd-qb9w4" event={"ID":"9bceb03c-e3de-4bfd-b163-69cde861ce00","Type":"ContainerStarted","Data":"20cecb6fd82cdb62a2972286e41902a0441925ebfcb6adafb66187caa0f6c193"}
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.686954 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.686999 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5795f4fd-qb9w4"
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.695750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerStarted","Data":"e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1"}
Dec 06 06:47:45 crc kubenswrapper[4823]: I1206 06:47:45.780618 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podStartSLOduration=2.780593232 podStartE2EDuration="2.780593232s" podCreationTimestamp="2025-12-06 06:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:45.772289002 +0000 UTC m=+1367.058040962" watchObservedRunningTime="2025-12-06 06:47:45.780593232 +0000 UTC m=+1367.066345192"
Dec 06 06:47:46 crc kubenswrapper[4823]: I1206 06:47:46.735723 4823 generic.go:334] "Generic (PLEG): container finished" podID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" containerID="ccbc6492c4baaefac97b9f89624d19954c90cc49ec1420e482bc5e684f82b122" exitCode=0
Dec 06 06:47:46 crc kubenswrapper[4823]: I1206 06:47:46.739521 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kls2x" event={"ID":"157d2d95-42a3-4f80-8c1d-b8c27bee49be","Type":"ContainerDied","Data":"ccbc6492c4baaefac97b9f89624d19954c90cc49ec1420e482bc5e684f82b122"}
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.580746 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-576979cb46-vpljd"]
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.583580 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-576979cb46-vpljd"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.588580 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.588641 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.607423 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-576979cb46-vpljd"]
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.631260 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-public-tls-certs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.631606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-combined-ca-bundle\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.631811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d367d201-b052-4399-999b-a10e9b8a515f-logs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.631869 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhv5\" (UniqueName: \"kubernetes.io/projected/d367d201-b052-4399-999b-a10e9b8a515f-kube-api-access-jxhv5\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.631900 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-internal-tls-certs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd"
Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.631970 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data-custom\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd"
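Three kinds of container exit are interleaved above, all surfaced by generic.go:334 plus a matching ContainerDied event: watcher-decision-engine-0 exits 1 and gets a "RemoveContainer" and restart, the dnsmasq-dns init step exits 0 during normal startup, and the cinder-db-sync-kls2x job finishes with exit 0. Tallying exit codes per pod makes a crash-looping container stand out; a sketch under the same capture-specific assumptions:

    import re
    from collections import Counter

    FINISHED = re.compile(r'"Generic \(PLEG\): container finished" '
                          r'podID="([^"]+)" containerID="([0-9a-f]{64})" exitCode=(\d+)')

    def exit_code_counts(lines):
        """Counter keyed by (podID, exitCode); nonzero codes hint at restarts."""
        counts = Counter()
        for line in lines:
            if (m := FINISHED.search(line)):
                pod_id, _container, code = m.groups()
                counts[(pod_id, int(code))] += 1
        return counts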
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data-custom\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.632058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.733871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxhv5\" (UniqueName: \"kubernetes.io/projected/d367d201-b052-4399-999b-a10e9b8a515f-kube-api-access-jxhv5\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.733943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-internal-tls-certs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.734011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data-custom\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.734041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.734120 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-public-tls-certs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.734176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-combined-ca-bundle\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.734242 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d367d201-b052-4399-999b-a10e9b8a515f-logs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.734792 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d367d201-b052-4399-999b-a10e9b8a515f-logs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.741513 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-public-tls-certs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.741609 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-internal-tls-certs\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.745790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-combined-ca-bundle\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.756511 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data-custom\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.757702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxhv5\" (UniqueName: \"kubernetes.io/projected/d367d201-b052-4399-999b-a10e9b8a515f-kube-api-access-jxhv5\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.757722 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d367d201-b052-4399-999b-a10e9b8a515f-config-data\") pod \"barbican-api-576979cb46-vpljd\" (UID: \"d367d201-b052-4399-999b-a10e9b8a515f\") " pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:47 crc kubenswrapper[4823]: I1206 06:47:47.902549 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.209645 4823 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.350723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84hxh\" (UniqueName: \"kubernetes.io/projected/157d2d95-42a3-4f80-8c1d-b8c27bee49be-kube-api-access-84hxh\") pod \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") "
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.350800 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-config-data\") pod \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") "
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.351055 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-db-sync-config-data\") pod \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") "
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.351158 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-scripts\") pod \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") "
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.351186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/157d2d95-42a3-4f80-8c1d-b8c27bee49be-etc-machine-id\") pod \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") "
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.351213 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-combined-ca-bundle\") pod \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\" (UID: \"157d2d95-42a3-4f80-8c1d-b8c27bee49be\") "
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.354428 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/157d2d95-42a3-4f80-8c1d-b8c27bee49be-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "157d2d95-42a3-4f80-8c1d-b8c27bee49be" (UID: "157d2d95-42a3-4f80-8c1d-b8c27bee49be"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.361166 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157d2d95-42a3-4f80-8c1d-b8c27bee49be-kube-api-access-84hxh" (OuterVolumeSpecName: "kube-api-access-84hxh") pod "157d2d95-42a3-4f80-8c1d-b8c27bee49be" (UID: "157d2d95-42a3-4f80-8c1d-b8c27bee49be"). InnerVolumeSpecName "kube-api-access-84hxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.361985 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-scripts" (OuterVolumeSpecName: "scripts") pod "157d2d95-42a3-4f80-8c1d-b8c27bee49be" (UID: "157d2d95-42a3-4f80-8c1d-b8c27bee49be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.377070 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "157d2d95-42a3-4f80-8c1d-b8c27bee49be" (UID: "157d2d95-42a3-4f80-8c1d-b8c27bee49be"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.429350 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "157d2d95-42a3-4f80-8c1d-b8c27bee49be" (UID: "157d2d95-42a3-4f80-8c1d-b8c27bee49be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.454013 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84hxh\" (UniqueName: \"kubernetes.io/projected/157d2d95-42a3-4f80-8c1d-b8c27bee49be-kube-api-access-84hxh\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.454056 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.454068 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.454081 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/157d2d95-42a3-4f80-8c1d-b8c27bee49be-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.454094 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.472208 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-config-data" (OuterVolumeSpecName: "config-data") pod "157d2d95-42a3-4f80-8c1d-b8c27bee49be" (UID: "157d2d95-42a3-4f80-8c1d-b8c27bee49be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.551052 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-576979cb46-vpljd"] Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.555807 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d2d95-42a3-4f80-8c1d-b8c27bee49be-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:48 crc kubenswrapper[4823]: W1206 06:47:48.567399 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd367d201_b052_4399_999b_a10e9b8a515f.slice/crio-8e9ac3f69cbbfb7e98009f38d88f0700d50b65176946eba61fd329402211281c WatchSource:0}: Error finding container 8e9ac3f69cbbfb7e98009f38d88f0700d50b65176946eba61fd329402211281c: Status 404 returned error can't find the container with id 8e9ac3f69cbbfb7e98009f38d88f0700d50b65176946eba61fd329402211281c Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.766172 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-689f45894f-mlpws" event={"ID":"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9","Type":"ContainerStarted","Data":"fedbc952249af6d74e29e79280527efbe8c171f11a45ab120a3025ff514a79f4"} Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.768308 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" event={"ID":"ab1d7d34-2799-4553-9895-57c3c573cda2","Type":"ContainerStarted","Data":"37f750d729d0a4ac49fa1cc2326ddb3deef759d8e38640778ddf67826a439f78"} Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.773445 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kls2x" event={"ID":"157d2d95-42a3-4f80-8c1d-b8c27bee49be","Type":"ContainerDied","Data":"e441d168657a75267a532efa90115be581802509a700ac48db39a9349a69e2c2"} Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.773509 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e441d168657a75267a532efa90115be581802509a700ac48db39a9349a69e2c2" Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.773590 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kls2x" Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.779400 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" event={"ID":"cfa91e4a-7dd0-410e-9915-5ebc0c265902","Type":"ContainerStarted","Data":"f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9"} Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.779854 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.782515 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576979cb46-vpljd" event={"ID":"d367d201-b052-4399-999b-a10e9b8a515f","Type":"ContainerStarted","Data":"8e9ac3f69cbbfb7e98009f38d88f0700d50b65176946eba61fd329402211281c"} Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.833467 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" podStartSLOduration=6.833440992 podStartE2EDuration="6.833440992s" podCreationTimestamp="2025-12-06 06:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:48.826761759 +0000 UTC m=+1370.112513739" watchObservedRunningTime="2025-12-06 06:47:48.833440992 +0000 UTC m=+1370.119192952" Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.884738 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.885121 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api-log" containerID="cri-o://c6734a4efe1449491425566b65e13adaea3d60b6ad2eb5cf7fb173129d8f14f8" gracePeriod=30 Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.885965 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api" containerID="cri-o://cdca3b5f56c09f8ffa0a1b94a08ab68de64f8f38eefdd06ac4b3dbf1e6bb0077" gracePeriod=30 Dec 06 06:47:48 crc kubenswrapper[4823]: I1206 06:47:48.887031 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.421946 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 06:47:49 crc kubenswrapper[4823]: E1206 06:47:49.432533 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" containerName="cinder-db-sync" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.432587 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" containerName="cinder-db-sync" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.433552 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" containerName="cinder-db-sync" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.435601 4823 util.go:30] "No sandbox for pod can be found. 
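The watcher-api-0 shutdown above is the graceful path: a "SyncLoop DELETE", then kuberuntime_container.go:808 kills each container with the pod's 30-second grace period, and the exit code 143 (SIGTERM) reported below is the expected outcome; the horizon readiness-probe failure is unrelated background noise from another pod. A sketch that lists every graceful kill in such a capture:

    import re

    KILL = re.compile(r'"Killing container with a grace period" pod="([^"]+)" '
                      r'podUID="[^"]+" containerName="([^"]+)" '
                      r'containerID="cri-o://([0-9a-f]{64})" gracePeriod=(\d+)')

    def graceful_kills(lines):
        """[(pod, container, containerID, gracePeriodSeconds)] in capture order."""
        return [(m.group(1), m.group(2), m.group(3), int(m.group(4)))
                for m in map(KILL.search, lines) if m]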
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.479852 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.480171 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.480321 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.481074 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2cwbd"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.505800 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.534725 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-scripts\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.534907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6aa2f666-1f1f-4930-a728-81b27d74a0f8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.534960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.534987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.535038 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.535073 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp9t\" (UniqueName: \"kubernetes.io/projected/6aa2f666-1f1f-4930-a728-81b27d74a0f8-kube-api-access-fjp9t\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.599758 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df895c6d9-tzwbz"]
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.643107 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.643177 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.643236 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp9t\" (UniqueName: \"kubernetes.io/projected/6aa2f666-1f1f-4930-a728-81b27d74a0f8-kube-api-access-fjp9t\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.643308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-scripts\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.643386 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6aa2f666-1f1f-4930-a728-81b27d74a0f8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.643419 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.654870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6aa2f666-1f1f-4930-a728-81b27d74a0f8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.678800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-scripts\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.680353 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.704568 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.705968 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.730468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp9t\" (UniqueName: \"kubernetes.io/projected/6aa2f666-1f1f-4930-a728-81b27d74a0f8-kube-api-access-fjp9t\") pod \"cinder-scheduler-0\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.752714 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b9dc84c57-c6khp"]
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.754356 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.784495 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b9dc84c57-c6khp"]
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.842040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576979cb46-vpljd" event={"ID":"d367d201-b052-4399-999b-a10e9b8a515f","Type":"ContainerStarted","Data":"3aae6f506f5bbfcc99a6d3c3aa2d8de036ec9bd09f3801585fbd20198de878bf"}
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.851261 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.851371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-config\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.851397 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.851430 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-svc\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.851451 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.851473 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnr5x\" (UniqueName: \"kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp"
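The pod_startup_latency_tracker records carry the kubelet's startup SLI: podStartSLOduration is podStartE2EDuration minus the time spent pulling images, and the zero timestamp 0001-01-01 00:00:00 +0000 UTC is a sentinel meaning no pull was observed (which is why SLO equals E2E for barbican-api-5d5795f4fd-qb9w4 and dnsmasq-dns-df895c6d9-tzwbz above). For barbican-worker-689f45894f-mlpws below, the pull window 06:47:44.297917857 to 06:47:47.383691238 lasts 3.085773381s, and 7.95794157s minus that is exactly the reported podStartSLOduration=4.872168189. A sketch that re-derives the figure from a single record (microsecond truncation aside):

    import re
    from datetime import datetime

    FIELD = re.compile(r'(\w+)="([^"]+)"')
    NEVER = "0001-01-01 00:00:00 +0000 UTC"

    def parse_ts(s):
        # "2025-12-06 06:47:44.297917857 +0000 UTC" -> datetime; %f accepts at
        # most six fractional digits, so trim the nanosecond tail first.
        date, clock, tz, _ = s.split(" ")
        return datetime.strptime(f"{date} {clock[:15]} {tz}",
                                 "%Y-%m-%d %H:%M:%S.%f %z")

    def slo_duration(record):
        """Recompute podStartSLOduration = E2E - image pull time for one record."""
        kv = dict(FIELD.findall(record))
        e2e = float(kv["podStartE2EDuration"].rstrip("s"))
        if kv["firstStartedPulling"] == NEVER:
            return e2e                      # nothing pulled: SLO == E2E
        pull = parse_ts(kv["lastFinishedPulling"]) - parse_ts(kv["firstStartedPulling"])
        return e2e - pull.total_seconds()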
06:47:49.851473 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnr5x\" (UniqueName: \"kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.863178 4823 generic.go:334] "Generic (PLEG): container finished" podID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerID="c6734a4efe1449491425566b65e13adaea3d60b6ad2eb5cf7fb173129d8f14f8" exitCode=143 Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.863289 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"8db6f8d1-3006-4e17-a979-7777a0919c7e","Type":"ContainerDied","Data":"c6734a4efe1449491425566b65e13adaea3d60b6ad2eb5cf7fb173129d8f14f8"} Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.876562 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-689f45894f-mlpws" event={"ID":"76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9","Type":"ContainerStarted","Data":"7f03ba54edd5ced5636154e9e8d92cf311cc0bd9209ebea2537d5a271c560b22"} Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.885440 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.886108 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" event={"ID":"ab1d7d34-2799-4553-9895-57c3c573cda2","Type":"ContainerStarted","Data":"c9e120bc57cfe48f4ea91a424cec7215005e1f83082da381be755765737fd71a"} Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.954078 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-config\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.954141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.954183 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-svc\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.954208 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.954274 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnr5x\" (UniqueName: 
\"kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.954484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.955032 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.957376 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.957648 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-config\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.957966 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-689f45894f-mlpws" podStartSLOduration=4.872168189 podStartE2EDuration="7.95794157s" podCreationTimestamp="2025-12-06 06:47:42 +0000 UTC" firstStartedPulling="2025-12-06 06:47:44.297917857 +0000 UTC m=+1365.583669817" lastFinishedPulling="2025-12-06 06:47:47.383691238 +0000 UTC m=+1368.669443198" observedRunningTime="2025-12-06 06:47:49.920150898 +0000 UTC m=+1371.205902878" watchObservedRunningTime="2025-12-06 06:47:49.95794157 +0000 UTC m=+1371.243693530" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.958259 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.959000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-svc\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.960464 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.972226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:49 crc kubenswrapper[4823]: I1206 06:47:49.984457 4823 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"cinder-api-config-data" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.032616 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnr5x\" (UniqueName: \"kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x\") pod \"dnsmasq-dns-5b9dc84c57-c6khp\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.035679 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.041570 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7fcb8dc678-hn4ms" podStartSLOduration=4.571166572 podStartE2EDuration="8.041543475s" podCreationTimestamp="2025-12-06 06:47:42 +0000 UTC" firstStartedPulling="2025-12-06 06:47:43.934216479 +0000 UTC m=+1365.219968439" lastFinishedPulling="2025-12-06 06:47:47.404593382 +0000 UTC m=+1368.690345342" observedRunningTime="2025-12-06 06:47:49.990029577 +0000 UTC m=+1371.275781537" watchObservedRunningTime="2025-12-06 06:47:50.041543475 +0000 UTC m=+1371.327295435" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059129 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lnw8\" (UniqueName: \"kubernetes.io/projected/56b86320-1d45-4a65-8be8-b2f0d6a6395e-kube-api-access-7lnw8\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-scripts\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059267 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56b86320-1d45-4a65-8be8-b2f0d6a6395e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059384 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data-custom\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.059458 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b86320-1d45-4a65-8be8-b2f0d6a6395e-logs\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.128590 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.162854 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.162920 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.162956 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data-custom\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.162975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b86320-1d45-4a65-8be8-b2f0d6a6395e-logs\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.163029 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lnw8\" (UniqueName: \"kubernetes.io/projected/56b86320-1d45-4a65-8be8-b2f0d6a6395e-kube-api-access-7lnw8\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.163058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-scripts\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.163101 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56b86320-1d45-4a65-8be8-b2f0d6a6395e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.163194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56b86320-1d45-4a65-8be8-b2f0d6a6395e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.173187 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data\") pod 
\"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.179251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-scripts\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.179765 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b86320-1d45-4a65-8be8-b2f0d6a6395e-logs\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.183495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.186347 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data-custom\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.202252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lnw8\" (UniqueName: \"kubernetes.io/projected/56b86320-1d45-4a65-8be8-b2f0d6a6395e-kube-api-access-7lnw8\") pod \"cinder-api-0\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") " pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.205391 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.364124 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.465370 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.835346 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 06:47:50 crc kubenswrapper[4823]: W1206 06:47:50.867159 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aa2f666_1f1f_4930_a728_81b27d74a0f8.slice/crio-c14030713e9b27660a8d9f606b023211449d1e88f81c02059a29a077dd12b1e2 WatchSource:0}: Error finding container c14030713e9b27660a8d9f606b023211449d1e88f81c02059a29a077dd12b1e2: Status 404 returned error can't find the container with id c14030713e9b27660a8d9f606b023211449d1e88f81c02059a29a077dd12b1e2 Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.941756 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576979cb46-vpljd" event={"ID":"d367d201-b052-4399-999b-a10e9b8a515f","Type":"ContainerStarted","Data":"06467caecb4110ec90a64aef7953586bdb37166cdd0211a25e84ff3af43728e2"} Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.942996 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.943035 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.945943 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6aa2f666-1f1f-4930-a728-81b27d74a0f8","Type":"ContainerStarted","Data":"c14030713e9b27660a8d9f606b023211449d1e88f81c02059a29a077dd12b1e2"} Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.945987 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:50 crc kubenswrapper[4823]: I1206 06:47:50.946635 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerName="dnsmasq-dns" containerID="cri-o://f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9" gracePeriod=10 Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.018441 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.088011 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-576979cb46-vpljd" podStartSLOduration=4.087985618 podStartE2EDuration="4.087985618s" podCreationTimestamp="2025-12-06 06:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:50.982095448 +0000 UTC m=+1372.267847408" watchObservedRunningTime="2025-12-06 06:47:51.087985618 +0000 UTC m=+1372.373737578" Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.262424 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b9dc84c57-c6khp"] Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.501945 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.858351 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.960684 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-svc\") pod \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.961226 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-swift-storage-0\") pod \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.961301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-config\") pod \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.961332 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzrwz\" (UniqueName: \"kubernetes.io/projected/cfa91e4a-7dd0-410e-9915-5ebc0c265902-kube-api-access-mzrwz\") pod \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.961351 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-nb\") pod \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.961371 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-sb\") pod \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\" (UID: \"cfa91e4a-7dd0-410e-9915-5ebc0c265902\") " Dec 06 06:47:51 crc kubenswrapper[4823]: I1206 06:47:51.988955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa91e4a-7dd0-410e-9915-5ebc0c265902-kube-api-access-mzrwz" (OuterVolumeSpecName: "kube-api-access-mzrwz") pod "cfa91e4a-7dd0-410e-9915-5ebc0c265902" (UID: "cfa91e4a-7dd0-410e-9915-5ebc0c265902"). InnerVolumeSpecName "kube-api-access-mzrwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.002073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" event={"ID":"879441cf-44a7-458b-8cfe-1ac422e1f34d","Type":"ContainerStarted","Data":"5bd92eefc17b2fd6aeda7a781ce66d23f52b1e74af42657e2f471bcc07ca2bbb"} Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.037978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"56b86320-1d45-4a65-8be8-b2f0d6a6395e","Type":"ContainerStarted","Data":"9ec87571fbe71a4dead32b81419de0af464000b2284825583a6e2cc4a21e079b"} Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.066437 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzrwz\" (UniqueName: \"kubernetes.io/projected/cfa91e4a-7dd0-410e-9915-5ebc0c265902-kube-api-access-mzrwz\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.076233 4823 generic.go:334] "Generic (PLEG): container finished" podID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerID="f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9" exitCode=0 Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.076382 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" event={"ID":"cfa91e4a-7dd0-410e-9915-5ebc0c265902","Type":"ContainerDied","Data":"f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9"} Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.076441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" event={"ID":"cfa91e4a-7dd0-410e-9915-5ebc0c265902","Type":"ContainerDied","Data":"9bc118ee9592320d053b3d2be95d665884affdab223fcf3e40ac1eb8e034f488"} Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.076478 4823 scope.go:117] "RemoveContainer" containerID="f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.076735 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df895c6d9-tzwbz" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.115869 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cfa91e4a-7dd0-410e-9915-5ebc0c265902" (UID: "cfa91e4a-7dd0-410e-9915-5ebc0c265902"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.173182 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.173705 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfa91e4a-7dd0-410e-9915-5ebc0c265902" (UID: "cfa91e4a-7dd0-410e-9915-5ebc0c265902"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.176237 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfa91e4a-7dd0-410e-9915-5ebc0c265902" (UID: "cfa91e4a-7dd0-410e-9915-5ebc0c265902"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.179959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfa91e4a-7dd0-410e-9915-5ebc0c265902" (UID: "cfa91e4a-7dd0-410e-9915-5ebc0c265902"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.255578 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-config" (OuterVolumeSpecName: "config") pod "cfa91e4a-7dd0-410e-9915-5ebc0c265902" (UID: "cfa91e4a-7dd0-410e-9915-5ebc0c265902"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.274090 4823 scope.go:117] "RemoveContainer" containerID="6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.276783 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.276841 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.276856 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.276868 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa91e4a-7dd0-410e-9915-5ebc0c265902-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:52 crc kubenswrapper[4823]: E1206 06:47:52.340620 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod879441cf_44a7_458b_8cfe_1ac422e1f34d.slice/crio-conmon-32ff300450287671c32063acc66caf696dc8f09ee67ab7d08c332c326ac60616.scope\": RecentStats: unable to find data in memory cache]" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.362833 4823 scope.go:117] "RemoveContainer" containerID="f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9" Dec 06 06:47:52 crc kubenswrapper[4823]: E1206 06:47:52.363254 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9\": container with ID starting with f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9 not found: ID does not exist" 
containerID="f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.363296 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9"} err="failed to get container status \"f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9\": rpc error: code = NotFound desc = could not find container \"f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9\": container with ID starting with f4477e942fb09f95f587ddabcdfd2a5d718c87c8a7d23bd54ff947c66e2536f9 not found: ID does not exist" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.363325 4823 scope.go:117] "RemoveContainer" containerID="6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907" Dec 06 06:47:52 crc kubenswrapper[4823]: E1206 06:47:52.363832 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907\": container with ID starting with 6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907 not found: ID does not exist" containerID="6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.363914 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907"} err="failed to get container status \"6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907\": rpc error: code = NotFound desc = could not find container \"6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907\": container with ID starting with 6052b88190ba73ac116c3b86d33e1c39c030adb187178caca1248356c664c907 not found: ID does not exist" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.383227 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-664df9559f-rrrdk" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.462399 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df895c6d9-tzwbz"] Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.472555 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df895c6d9-tzwbz"] Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.641028 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9322/\": read tcp 10.217.0.2:44840->10.217.0.164:9322: read: connection reset by peer" Dec 06 06:47:52 crc kubenswrapper[4823]: I1206 06:47:52.641075 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.164:9322/\": read tcp 10.217.0.2:44826->10.217.0.164:9322: read: connection reset by peer" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.098094 4823 generic.go:334] "Generic (PLEG): container finished" podID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerID="32ff300450287671c32063acc66caf696dc8f09ee67ab7d08c332c326ac60616" exitCode=0 Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.098163 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" event={"ID":"879441cf-44a7-458b-8cfe-1ac422e1f34d","Type":"ContainerDied","Data":"32ff300450287671c32063acc66caf696dc8f09ee67ab7d08c332c326ac60616"} Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.147755 4823 generic.go:334] "Generic (PLEG): container finished" podID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerID="cdca3b5f56c09f8ffa0a1b94a08ab68de64f8f38eefdd06ac4b3dbf1e6bb0077" exitCode=0 Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.230440 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" path="/var/lib/kubelet/pods/cfa91e4a-7dd0-410e-9915-5ebc0c265902/volumes" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.231492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"8db6f8d1-3006-4e17-a979-7777a0919c7e","Type":"ContainerDied","Data":"cdca3b5f56c09f8ffa0a1b94a08ab68de64f8f38eefdd06ac4b3dbf1e6bb0077"} Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.344163 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.445181 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 06:47:53 crc kubenswrapper[4823]: E1206 06:47:53.446212 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerName="dnsmasq-dns" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.446231 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerName="dnsmasq-dns" Dec 06 06:47:53 crc kubenswrapper[4823]: E1206 06:47:53.446280 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerName="init" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.446289 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerName="init" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.454273 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa91e4a-7dd0-410e-9915-5ebc0c265902" containerName="dnsmasq-dns" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.456073 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.469180 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.473390 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.487262 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cfdpm" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.508219 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.540345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03fdf8-c76b-4330-b7fb-58142df075c3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.540550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2l7z\" (UniqueName: \"kubernetes.io/projected/bc03fdf8-c76b-4330-b7fb-58142df075c3-kube-api-access-w2l7z\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.540733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc03fdf8-c76b-4330-b7fb-58142df075c3-openstack-config\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.540878 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc03fdf8-c76b-4330-b7fb-58142df075c3-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.661744 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc03fdf8-c76b-4330-b7fb-58142df075c3-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.662277 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03fdf8-c76b-4330-b7fb-58142df075c3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.662399 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2l7z\" (UniqueName: \"kubernetes.io/projected/bc03fdf8-c76b-4330-b7fb-58142df075c3-kube-api-access-w2l7z\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.662487 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc03fdf8-c76b-4330-b7fb-58142df075c3-openstack-config\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.663955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc03fdf8-c76b-4330-b7fb-58142df075c3-openstack-config\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.695355 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2l7z\" (UniqueName: \"kubernetes.io/projected/bc03fdf8-c76b-4330-b7fb-58142df075c3-kube-api-access-w2l7z\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.696218 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc03fdf8-c76b-4330-b7fb-58142df075c3-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:53 crc kubenswrapper[4823]: I1206 06:47:53.738562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03fdf8-c76b-4330-b7fb-58142df075c3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc03fdf8-c76b-4330-b7fb-58142df075c3\") " pod="openstack/openstackclient" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.008473 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.022438 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.077315 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-custom-prometheus-ca\") pod \"8db6f8d1-3006-4e17-a979-7777a0919c7e\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.077733 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8db6f8d1-3006-4e17-a979-7777a0919c7e-logs\") pod \"8db6f8d1-3006-4e17-a979-7777a0919c7e\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.077864 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whfxs\" (UniqueName: \"kubernetes.io/projected/8db6f8d1-3006-4e17-a979-7777a0919c7e-kube-api-access-whfxs\") pod \"8db6f8d1-3006-4e17-a979-7777a0919c7e\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.078001 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-combined-ca-bundle\") pod \"8db6f8d1-3006-4e17-a979-7777a0919c7e\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.078037 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-config-data\") pod \"8db6f8d1-3006-4e17-a979-7777a0919c7e\" (UID: \"8db6f8d1-3006-4e17-a979-7777a0919c7e\") " Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.085023 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8db6f8d1-3006-4e17-a979-7777a0919c7e-logs" (OuterVolumeSpecName: "logs") pod "8db6f8d1-3006-4e17-a979-7777a0919c7e" (UID: "8db6f8d1-3006-4e17-a979-7777a0919c7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.123962 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db6f8d1-3006-4e17-a979-7777a0919c7e-kube-api-access-whfxs" (OuterVolumeSpecName: "kube-api-access-whfxs") pod "8db6f8d1-3006-4e17-a979-7777a0919c7e" (UID: "8db6f8d1-3006-4e17-a979-7777a0919c7e"). InnerVolumeSpecName "kube-api-access-whfxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.183408 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8db6f8d1-3006-4e17-a979-7777a0919c7e-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.183453 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whfxs\" (UniqueName: \"kubernetes.io/projected/8db6f8d1-3006-4e17-a979-7777a0919c7e-kube-api-access-whfxs\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.209599 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"8db6f8d1-3006-4e17-a979-7777a0919c7e","Type":"ContainerDied","Data":"99ae071c3ff352e4d0e258bd996b4fd860bb3895997540a872eb9ee4f486505a"} Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.209678 4823 scope.go:117] "RemoveContainer" containerID="cdca3b5f56c09f8ffa0a1b94a08ab68de64f8f38eefdd06ac4b3dbf1e6bb0077" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.209835 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.247469 4823 generic.go:334] "Generic (PLEG): container finished" podID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerID="e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1" exitCode=1 Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.247889 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerDied","Data":"e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1"} Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.248779 4823 scope.go:117] "RemoveContainer" containerID="e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1" Dec 06 06:47:54 crc kubenswrapper[4823]: E1206 06:47:54.249099 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24)\"" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.252373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6aa2f666-1f1f-4930-a728-81b27d74a0f8","Type":"ContainerStarted","Data":"ce9e0741c3380a1227df57cce5b2d72c1a3177dc3b33c0439fbc5e078864b731"} Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.331837 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8db6f8d1-3006-4e17-a979-7777a0919c7e" (UID: "8db6f8d1-3006-4e17-a979-7777a0919c7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.331992 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8db6f8d1-3006-4e17-a979-7777a0919c7e" (UID: "8db6f8d1-3006-4e17-a979-7777a0919c7e"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.352298 4823 scope.go:117] "RemoveContainer" containerID="c6734a4efe1449491425566b65e13adaea3d60b6ad2eb5cf7fb173129d8f14f8" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.399426 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.399465 4823 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.449930 4823 scope.go:117] "RemoveContainer" containerID="6e25610a99f6f361047fecd99d184be47e69038785c4aa0ac4fb0e5acb0c4398" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.472863 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-config-data" (OuterVolumeSpecName: "config-data") pod "8db6f8d1-3006-4e17-a979-7777a0919c7e" (UID: "8db6f8d1-3006-4e17-a979-7777a0919c7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.502887 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db6f8d1-3006-4e17-a979-7777a0919c7e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.663325 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.722716 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.774771 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:54 crc kubenswrapper[4823]: E1206 06:47:54.775801 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.775822 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api" Dec 06 06:47:54 crc kubenswrapper[4823]: E1206 06:47:54.775856 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api-log" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.775866 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api-log" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.776133 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api-log" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.776165 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" containerName="watcher-api" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.777754 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.785562 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.785871 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.786022 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.791273 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813201 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nwfj\" (UniqueName: \"kubernetes.io/projected/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-kube-api-access-8nwfj\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813325 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-config-data\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813472 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813521 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-logs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.813720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-public-tls-certs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.841286 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d5795f4fd-qb9w4" 
podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918132 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-config-data\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918155 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918247 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-logs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918278 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-public-tls-certs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.918324 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nwfj\" (UniqueName: \"kubernetes.io/projected/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-kube-api-access-8nwfj\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.920175 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-logs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.930056 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-public-tls-certs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.930199 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.939573 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-config-data\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.940273 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.942650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.947485 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 06:47:54 crc kubenswrapper[4823]: I1206 06:47:54.949950 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nwfj\" (UniqueName: \"kubernetes.io/projected/a9651ba4-0674-42c6-bd38-cd1d83e8a0d7-kube-api-access-8nwfj\") pod \"watcher-api-0\" (UID: \"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7\") " pod="openstack/watcher-api-0" Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.169491 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db6f8d1-3006-4e17-a979-7777a0919c7e" path="/var/lib/kubelet/pods/8db6f8d1-3006-4e17-a979-7777a0919c7e/volumes" Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.210552 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.328368 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" event={"ID":"879441cf-44a7-458b-8cfe-1ac422e1f34d","Type":"ContainerStarted","Data":"aaffe498cbaa476bc7ccbc4e66f85277575a3a946b142c976544dd6cfabf5f92"} Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.328491 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.362433 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" podStartSLOduration=6.362384737 podStartE2EDuration="6.362384737s" podCreationTimestamp="2025-12-06 06:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:55.360786741 +0000 UTC m=+1376.646538711" watchObservedRunningTime="2025-12-06 06:47:55.362384737 +0000 UTC m=+1376.648136697" Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.379019 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6aa2f666-1f1f-4930-a728-81b27d74a0f8","Type":"ContainerStarted","Data":"c037fe83a3e74d753802e3502904161196d7284b68a052fdcd7b6a41bfe6d55d"} Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.390885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bc03fdf8-c76b-4330-b7fb-58142df075c3","Type":"ContainerStarted","Data":"e5e9899473098bc5053bc6671e6f11a17689298109a5d24b858661b1c5b490be"} Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.400895 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"56b86320-1d45-4a65-8be8-b2f0d6a6395e","Type":"ContainerStarted","Data":"407264a4c60744014a2edb04d8a9de85229d670b680eeafe9a5c8d053fbe881a"} Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.421980 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.010309955 podStartE2EDuration="6.421959258s" podCreationTimestamp="2025-12-06 06:47:49 +0000 UTC" firstStartedPulling="2025-12-06 06:47:50.870231216 +0000 UTC m=+1372.155983176" lastFinishedPulling="2025-12-06 06:47:51.281880519 +0000 UTC m=+1372.567632479" observedRunningTime="2025-12-06 06:47:55.418960582 +0000 UTC m=+1376.704712542" watchObservedRunningTime="2025-12-06 06:47:55.421959258 +0000 UTC m=+1376.707711218" Dec 06 06:47:55 crc kubenswrapper[4823]: W1206 06:47:55.941342 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9651ba4_0674_42c6_bd38_cd1d83e8a0d7.slice/crio-cc483ad511d43782d1d91173e6707de8bce49dc877bc767857d14950b132e5d3 WatchSource:0}: Error finding container cc483ad511d43782d1d91173e6707de8bce49dc877bc767857d14950b132e5d3: Status 404 returned error can't find the container with id cc483ad511d43782d1d91173e6707de8bce49dc877bc767857d14950b132e5d3 Dec 06 06:47:55 crc kubenswrapper[4823]: I1206 06:47:55.980294 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.442012 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"56b86320-1d45-4a65-8be8-b2f0d6a6395e","Type":"ContainerStarted","Data":"552ac67794564e022b04a79839d88acc87f37a4568bb8747cf005e4a096f7726"} Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.442287 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api-log" containerID="cri-o://407264a4c60744014a2edb04d8a9de85229d670b680eeafe9a5c8d053fbe881a" gracePeriod=30 Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.442414 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api" containerID="cri-o://552ac67794564e022b04a79839d88acc87f37a4568bb8747cf005e4a096f7726" gracePeriod=30 Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.442852 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.448739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7","Type":"ContainerStarted","Data":"cc483ad511d43782d1d91173e6707de8bce49dc877bc767857d14950b132e5d3"} Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.832481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d5795f4fd-qb9w4" Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.858943 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.858920223 podStartE2EDuration="7.858920223s" podCreationTimestamp="2025-12-06 06:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:56.475131595 +0000 UTC m=+1377.760883565" watchObservedRunningTime="2025-12-06 06:47:56.858920223 +0000 UTC m=+1378.144672183" Dec 06 06:47:56 crc kubenswrapper[4823]: I1206 06:47:56.971992 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d5795f4fd-qb9w4" Dec 06 06:47:57 crc kubenswrapper[4823]: I1206 06:47:57.485040 4823 generic.go:334] "Generic (PLEG): container finished" podID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerID="407264a4c60744014a2edb04d8a9de85229d670b680eeafe9a5c8d053fbe881a" exitCode=143 Dec 06 06:47:57 crc kubenswrapper[4823]: I1206 06:47:57.485613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"56b86320-1d45-4a65-8be8-b2f0d6a6395e","Type":"ContainerDied","Data":"407264a4c60744014a2edb04d8a9de85229d670b680eeafe9a5c8d053fbe881a"} Dec 06 06:47:57 crc kubenswrapper[4823]: I1206 06:47:57.494505 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7","Type":"ContainerStarted","Data":"cf9ff151e3813ba38e7b6c6fe635970503a30ff3cc08c1225a016815b01ca862"} Dec 06 06:47:57 crc kubenswrapper[4823]: I1206 06:47:57.494549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a9651ba4-0674-42c6-bd38-cd1d83e8a0d7","Type":"ContainerStarted","Data":"9731b6596e87f21ec00bd165b0daaaf809c4b3c8f487584b289bbaa153443d19"} Dec 06 06:47:57 crc kubenswrapper[4823]: I1206 06:47:57.495355 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 06:47:57 
crc kubenswrapper[4823]: I1206 06:47:57.517844 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.5178198800000002 podStartE2EDuration="3.51781988s" podCreationTimestamp="2025-12-06 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:47:57.517573333 +0000 UTC m=+1378.803325293" watchObservedRunningTime="2025-12-06 06:47:57.51781988 +0000 UTC m=+1378.803571840" Dec 06 06:47:57 crc kubenswrapper[4823]: I1206 06:47:57.633973 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 06:47:58 crc kubenswrapper[4823]: I1206 06:47:58.886754 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Dec 06 06:47:58 crc kubenswrapper[4823]: I1206 06:47:58.887152 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:47:59 crc kubenswrapper[4823]: I1206 06:47:59.886649 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.141014 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.208815 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.208867 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.208880 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.209769 4823 scope.go:117] "RemoveContainer" containerID="e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1" Dec 06 06:48:00 crc kubenswrapper[4823]: E1206 06:48:00.210047 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24)\"" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.217761 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.217859 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.314385 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c669f5d67-lzz9h"] Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.314642 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerName="dnsmasq-dns" 
containerID="cri-o://b7be0bf016888fe1effff48cefc952a6474ebc37a07f703469d2b5bb5f921a2e" gracePeriod=10 Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.595471 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerID="b7be0bf016888fe1effff48cefc952a6474ebc37a07f703469d2b5bb5f921a2e" exitCode=0 Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.595518 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" event={"ID":"f3581c1d-97bc-41ba-80d4-89c0362131f7","Type":"ContainerDied","Data":"b7be0bf016888fe1effff48cefc952a6474ebc37a07f703469d2b5bb5f921a2e"} Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.605106 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 06:48:00 crc kubenswrapper[4823]: I1206 06:48:00.683059 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.172621 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.260448 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-swift-storage-0\") pod \"f3581c1d-97bc-41ba-80d4-89c0362131f7\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.260542 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-nb\") pod \"f3581c1d-97bc-41ba-80d4-89c0362131f7\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.260578 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-sb\") pod \"f3581c1d-97bc-41ba-80d4-89c0362131f7\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.260734 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c7jm\" (UniqueName: \"kubernetes.io/projected/f3581c1d-97bc-41ba-80d4-89c0362131f7-kube-api-access-9c7jm\") pod \"f3581c1d-97bc-41ba-80d4-89c0362131f7\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.260849 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-svc\") pod \"f3581c1d-97bc-41ba-80d4-89c0362131f7\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.260928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-config\") pod \"f3581c1d-97bc-41ba-80d4-89c0362131f7\" (UID: \"f3581c1d-97bc-41ba-80d4-89c0362131f7\") " Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.291527 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3581c1d-97bc-41ba-80d4-89c0362131f7-kube-api-access-9c7jm" (OuterVolumeSpecName: 
"kube-api-access-9c7jm") pod "f3581c1d-97bc-41ba-80d4-89c0362131f7" (UID: "f3581c1d-97bc-41ba-80d4-89c0362131f7"). InnerVolumeSpecName "kube-api-access-9c7jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.350985 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-config" (OuterVolumeSpecName: "config") pod "f3581c1d-97bc-41ba-80d4-89c0362131f7" (UID: "f3581c1d-97bc-41ba-80d4-89c0362131f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.355107 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3581c1d-97bc-41ba-80d4-89c0362131f7" (UID: "f3581c1d-97bc-41ba-80d4-89c0362131f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.363921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3581c1d-97bc-41ba-80d4-89c0362131f7" (UID: "f3581c1d-97bc-41ba-80d4-89c0362131f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.364615 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.364754 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.364776 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c7jm\" (UniqueName: \"kubernetes.io/projected/f3581c1d-97bc-41ba-80d4-89c0362131f7-kube-api-access-9c7jm\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.364799 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.412121 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3581c1d-97bc-41ba-80d4-89c0362131f7" (UID: "f3581c1d-97bc-41ba-80d4-89c0362131f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.430955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3581c1d-97bc-41ba-80d4-89c0362131f7" (UID: "f3581c1d-97bc-41ba-80d4-89c0362131f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.466808 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.466854 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3581c1d-97bc-41ba-80d4-89c0362131f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.623068 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" event={"ID":"f3581c1d-97bc-41ba-80d4-89c0362131f7","Type":"ContainerDied","Data":"2e3f60526b23a86cc6c8dda197919af32fa0e35f928515ecd71834c39fd76cbf"} Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.623503 4823 scope.go:117] "RemoveContainer" containerID="b7be0bf016888fe1effff48cefc952a6474ebc37a07f703469d2b5bb5f921a2e" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.623102 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c669f5d67-lzz9h" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.623244 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="cinder-scheduler" containerID="cri-o://ce9e0741c3380a1227df57cce5b2d72c1a3177dc3b33c0439fbc5e078864b731" gracePeriod=30 Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.623287 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="probe" containerID="cri-o://c037fe83a3e74d753802e3502904161196d7284b68a052fdcd7b6a41bfe6d55d" gracePeriod=30 Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.672903 4823 scope.go:117] "RemoveContainer" containerID="3c5ff6eb0153aab2547a4b8810250dbdf697cbec57b0746a908a18fbdc06b5cb" Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.674802 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c669f5d67-lzz9h"] Dec 06 06:48:01 crc kubenswrapper[4823]: I1206 06:48:01.700994 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c669f5d67-lzz9h"] Dec 06 06:48:02 crc kubenswrapper[4823]: I1206 06:48:02.011656 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:48:02 crc kubenswrapper[4823]: I1206 06:48:02.504833 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a9651ba4-0674-42c6-bd38-cd1d83e8a0d7" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.175:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:48:02 crc kubenswrapper[4823]: I1206 06:48:02.657910 4823 generic.go:334] "Generic (PLEG): container finished" podID="f5301842-d5df-4df6-8699-56f86789df64" containerID="402d507b0a3393646cdc9b117e0bfb305e3b278e50d00ef1db371f76043ae9e7" exitCode=0 Dec 06 06:48:02 crc kubenswrapper[4823]: I1206 06:48:02.658242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9fd7r" 
event={"ID":"f5301842-d5df-4df6-8699-56f86789df64","Type":"ContainerDied","Data":"402d507b0a3393646cdc9b117e0bfb305e3b278e50d00ef1db371f76043ae9e7"} Dec 06 06:48:02 crc kubenswrapper[4823]: I1206 06:48:02.908997 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-576979cb46-vpljd" podUID="d367d201-b052-4399-999b-a10e9b8a515f" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.170:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:48:02 crc kubenswrapper[4823]: I1206 06:48:02.927724 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 06:48:03 crc kubenswrapper[4823]: I1206 06:48:03.152714 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" path="/var/lib/kubelet/pods/f3581c1d-97bc-41ba-80d4-89c0362131f7/volumes" Dec 06 06:48:04 crc kubenswrapper[4823]: I1206 06:48:04.701037 4823 generic.go:334] "Generic (PLEG): container finished" podID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerID="c037fe83a3e74d753802e3502904161196d7284b68a052fdcd7b6a41bfe6d55d" exitCode=0 Dec 06 06:48:04 crc kubenswrapper[4823]: I1206 06:48:04.701388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6aa2f666-1f1f-4930-a728-81b27d74a0f8","Type":"ContainerDied","Data":"c037fe83a3e74d753802e3502904161196d7284b68a052fdcd7b6a41bfe6d55d"} Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.035535 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-576979cb46-vpljd" Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.097380 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d5795f4fd-qb9w4"] Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.097689 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" containerID="cri-o://ca4533920163b383473cd1971a80d407da0e40d88b9bc268c6b16e0d655056e1" gracePeriod=30 Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.097857 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api" containerID="cri-o://9184cb05953d1a426eb45a5642e06bd4b7651e1b2717dc0488c41a48546bc559" gracePeriod=30 Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.214265 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.263041 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.388616 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.390077 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-769d66bc44-mzlht" Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.650918 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api" probeResult="failure" output="Get 
\"http://10.217.0.173:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.761093 4823 generic.go:334] "Generic (PLEG): container finished" podID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerID="ca4533920163b383473cd1971a80d407da0e40d88b9bc268c6b16e0d655056e1" exitCode=143 Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.761209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5795f4fd-qb9w4" event={"ID":"9bceb03c-e3de-4bfd-b163-69cde861ce00","Type":"ContainerDied","Data":"ca4533920163b383473cd1971a80d407da0e40d88b9bc268c6b16e0d655056e1"} Dec 06 06:48:05 crc kubenswrapper[4823]: I1206 06:48:05.781902 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 06:48:06 crc kubenswrapper[4823]: I1206 06:48:06.052111 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:48:06 crc kubenswrapper[4823]: I1206 06:48:06.052179 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:48:06 crc kubenswrapper[4823]: I1206 06:48:06.857718 4823 generic.go:334] "Generic (PLEG): container finished" podID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerID="ce9e0741c3380a1227df57cce5b2d72c1a3177dc3b33c0439fbc5e078864b731" exitCode=0 Dec 06 06:48:06 crc kubenswrapper[4823]: I1206 06:48:06.858739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6aa2f666-1f1f-4930-a728-81b27d74a0f8","Type":"ContainerDied","Data":"ce9e0741c3380a1227df57cce5b2d72c1a3177dc3b33c0439fbc5e078864b731"} Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.187247 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-f788j"] Dec 06 06:48:07 crc kubenswrapper[4823]: E1206 06:48:07.187855 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerName="dnsmasq-dns" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.187870 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerName="dnsmasq-dns" Dec 06 06:48:07 crc kubenswrapper[4823]: E1206 06:48:07.187914 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerName="init" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.187921 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerName="init" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.188120 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3581c1d-97bc-41ba-80d4-89c0362131f7" containerName="dnsmasq-dns" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.189032 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.201899 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-f788j"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.284968 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qdzrg"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.290636 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.294307 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qdzrg"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.329743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54f72\" (UniqueName: \"kubernetes.io/projected/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-kube-api-access-54f72\") pod \"nova-api-db-create-f788j\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") " pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.331919 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-operator-scripts\") pod \"nova-api-db-create-f788j\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") " pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.338285 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5f48-account-create-update-9428s"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.339998 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.343986 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.360063 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5f48-account-create-update-9428s"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.434960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv78d\" (UniqueName: \"kubernetes.io/projected/0439c056-347b-4a05-95aa-e85289754ecc-kube-api-access-nv78d\") pod \"nova-api-5f48-account-create-update-9428s\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") " pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.435042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0439c056-347b-4a05-95aa-e85289754ecc-operator-scripts\") pod \"nova-api-5f48-account-create-update-9428s\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") " pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.435076 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfz9\" (UniqueName: \"kubernetes.io/projected/afb8dbce-6f68-4245-971e-e9087ed93cf1-kube-api-access-tjfz9\") pod \"nova-cell0-db-create-qdzrg\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") " pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.435301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54f72\" (UniqueName: \"kubernetes.io/projected/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-kube-api-access-54f72\") pod \"nova-api-db-create-f788j\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") " pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.435391 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-operator-scripts\") pod \"nova-api-db-create-f788j\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") " pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.435755 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8dbce-6f68-4245-971e-e9087ed93cf1-operator-scripts\") pod \"nova-cell0-db-create-qdzrg\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") " pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.437620 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-operator-scripts\") pod \"nova-api-db-create-f788j\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") " pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.475866 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54f72\" (UniqueName: \"kubernetes.io/projected/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-kube-api-access-54f72\") 
pod \"nova-api-db-create-f788j\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") " pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.499800 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2pbnm"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.503099 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.522543 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f788j" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.526437 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2pbnm"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.535168 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-abd5-account-create-update-4jjrw"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.537070 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.537526 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8dbce-6f68-4245-971e-e9087ed93cf1-operator-scripts\") pod \"nova-cell0-db-create-qdzrg\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") " pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.537583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv78d\" (UniqueName: \"kubernetes.io/projected/0439c056-347b-4a05-95aa-e85289754ecc-kube-api-access-nv78d\") pod \"nova-api-5f48-account-create-update-9428s\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") " pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.537623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4rj\" (UniqueName: \"kubernetes.io/projected/20c6e0b9-0c53-43ea-a471-9076b51f877b-kube-api-access-qc4rj\") pod \"nova-cell1-db-create-2pbnm\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") " pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.537705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c6e0b9-0c53-43ea-a471-9076b51f877b-operator-scripts\") pod \"nova-cell1-db-create-2pbnm\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") " pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.537743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0439c056-347b-4a05-95aa-e85289754ecc-operator-scripts\") pod \"nova-api-5f48-account-create-update-9428s\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") " pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.538228 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjfz9\" (UniqueName: \"kubernetes.io/projected/afb8dbce-6f68-4245-971e-e9087ed93cf1-kube-api-access-tjfz9\") pod \"nova-cell0-db-create-qdzrg\" 
(UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") " pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.539736 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.541587 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0439c056-347b-4a05-95aa-e85289754ecc-operator-scripts\") pod \"nova-api-5f48-account-create-update-9428s\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") " pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.541893 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8dbce-6f68-4245-971e-e9087ed93cf1-operator-scripts\") pod \"nova-cell0-db-create-qdzrg\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") " pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.544607 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-abd5-account-create-update-4jjrw"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.581291 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv78d\" (UniqueName: \"kubernetes.io/projected/0439c056-347b-4a05-95aa-e85289754ecc-kube-api-access-nv78d\") pod \"nova-api-5f48-account-create-update-9428s\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") " pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.583635 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjfz9\" (UniqueName: \"kubernetes.io/projected/afb8dbce-6f68-4245-971e-e9087ed93cf1-kube-api-access-tjfz9\") pod \"nova-cell0-db-create-qdzrg\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") " pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.623769 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qdzrg" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.676203 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5f48-account-create-update-9428s" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.669877 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199828c4-e1bd-42a8-b35c-ba26f4c980b8-operator-scripts\") pod \"nova-cell0-abd5-account-create-update-4jjrw\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.678507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqj94\" (UniqueName: \"kubernetes.io/projected/199828c4-e1bd-42a8-b35c-ba26f4c980b8-kube-api-access-jqj94\") pod \"nova-cell0-abd5-account-create-update-4jjrw\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.678604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4rj\" (UniqueName: \"kubernetes.io/projected/20c6e0b9-0c53-43ea-a471-9076b51f877b-kube-api-access-qc4rj\") pod \"nova-cell1-db-create-2pbnm\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") " pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.678739 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c6e0b9-0c53-43ea-a471-9076b51f877b-operator-scripts\") pod \"nova-cell1-db-create-2pbnm\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") " pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.679680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c6e0b9-0c53-43ea-a471-9076b51f877b-operator-scripts\") pod \"nova-cell1-db-create-2pbnm\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") " pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.691799 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e9c7-account-create-update-mmhk9"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.694229 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.696475 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.701401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4rj\" (UniqueName: \"kubernetes.io/projected/20c6e0b9-0c53-43ea-a471-9076b51f877b-kube-api-access-qc4rj\") pod \"nova-cell1-db-create-2pbnm\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") " pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.738423 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e9c7-account-create-update-mmhk9"] Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.780526 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3b29187-c6a2-4b88-9215-759fe3cb8dad-operator-scripts\") pod \"nova-cell1-e9c7-account-create-update-mmhk9\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.780621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199828c4-e1bd-42a8-b35c-ba26f4c980b8-operator-scripts\") pod \"nova-cell0-abd5-account-create-update-4jjrw\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.780651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqj94\" (UniqueName: \"kubernetes.io/projected/199828c4-e1bd-42a8-b35c-ba26f4c980b8-kube-api-access-jqj94\") pod \"nova-cell0-abd5-account-create-update-4jjrw\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.780823 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddqb\" (UniqueName: \"kubernetes.io/projected/d3b29187-c6a2-4b88-9215-759fe3cb8dad-kube-api-access-zddqb\") pod \"nova-cell1-e9c7-account-create-update-mmhk9\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.783236 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199828c4-e1bd-42a8-b35c-ba26f4c980b8-operator-scripts\") pod \"nova-cell0-abd5-account-create-update-4jjrw\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.804151 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqj94\" (UniqueName: \"kubernetes.io/projected/199828c4-e1bd-42a8-b35c-ba26f4c980b8-kube-api-access-jqj94\") pod \"nova-cell0-abd5-account-create-update-4jjrw\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.882464 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="b984559e-efdf-4d21-917f-420506f550da" containerID="45c2548ae54254ed1b411a8df02203fa9d6a360e80300e3ebd0ebb4d1550db82" exitCode=0 Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.882640 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwpkn" event={"ID":"b984559e-efdf-4d21-917f-420506f550da","Type":"ContainerDied","Data":"45c2548ae54254ed1b411a8df02203fa9d6a360e80300e3ebd0ebb4d1550db82"} Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.883944 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zddqb\" (UniqueName: \"kubernetes.io/projected/d3b29187-c6a2-4b88-9215-759fe3cb8dad-kube-api-access-zddqb\") pod \"nova-cell1-e9c7-account-create-update-mmhk9\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.884053 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3b29187-c6a2-4b88-9215-759fe3cb8dad-operator-scripts\") pod \"nova-cell1-e9c7-account-create-update-mmhk9\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.885582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3b29187-c6a2-4b88-9215-759fe3cb8dad-operator-scripts\") pod \"nova-cell1-e9c7-account-create-update-mmhk9\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.904181 4823 generic.go:334] "Generic (PLEG): container finished" podID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerID="ef7e20b45fa10c9e0534bf0e943c77a7024e8c1acb561998214014561bcd023a" exitCode=137 Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.904245 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc5bf4b4-qft5r" event={"ID":"2bcc21a4-6b09-4804-86d5-85cc7f0267e7","Type":"ContainerDied","Data":"ef7e20b45fa10c9e0534bf0e943c77a7024e8c1acb561998214014561bcd023a"} Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.910029 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zddqb\" (UniqueName: \"kubernetes.io/projected/d3b29187-c6a2-4b88-9215-759fe3cb8dad-kube-api-access-zddqb\") pod \"nova-cell1-e9c7-account-create-update-mmhk9\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.913135 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2pbnm" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.925577 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:07 crc kubenswrapper[4823]: I1206 06:48:07.941489 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:08 crc kubenswrapper[4823]: I1206 06:48:08.793048 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": dial tcp 10.217.0.169:9311: connect: connection refused" Dec 06 06:48:08 crc kubenswrapper[4823]: I1206 06:48:08.793205 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": dial tcp 10.217.0.169:9311: connect: connection refused" Dec 06 06:48:08 crc kubenswrapper[4823]: I1206 06:48:08.887741 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.157:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8443: connect: connection refused" Dec 06 06:48:08 crc kubenswrapper[4823]: I1206 06:48:08.925650 4823 generic.go:334] "Generic (PLEG): container finished" podID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerID="9184cb05953d1a426eb45a5642e06bd4b7651e1b2717dc0488c41a48546bc559" exitCode=0 Dec 06 06:48:08 crc kubenswrapper[4823]: I1206 06:48:08.925742 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5795f4fd-qb9w4" event={"ID":"9bceb03c-e3de-4bfd-b163-69cde861ce00","Type":"ContainerDied","Data":"9184cb05953d1a426eb45a5642e06bd4b7651e1b2717dc0488c41a48546bc559"} Dec 06 06:48:10 crc kubenswrapper[4823]: I1206 06:48:10.049605 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.289277 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6bcdffb5bf-b97n9"] Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.291150 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.297234 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.297320 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.297475 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.310356 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bcdffb5bf-b97n9"] Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.384903 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b74f3f-7d40-4aae-a70c-95ff51beca50-run-httpd\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.384980 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-config-data\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.385019 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxxx\" (UniqueName: \"kubernetes.io/projected/58b74f3f-7d40-4aae-a70c-95ff51beca50-kube-api-access-lfxxx\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.385068 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/58b74f3f-7d40-4aae-a70c-95ff51beca50-etc-swift\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.385166 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-public-tls-certs\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.385251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b74f3f-7d40-4aae-a70c-95ff51beca50-log-httpd\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.385277 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-internal-tls-certs\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " 
pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.385456 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-combined-ca-bundle\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487190 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b74f3f-7d40-4aae-a70c-95ff51beca50-run-httpd\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487264 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-config-data\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxxx\" (UniqueName: \"kubernetes.io/projected/58b74f3f-7d40-4aae-a70c-95ff51beca50-kube-api-access-lfxxx\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487335 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/58b74f3f-7d40-4aae-a70c-95ff51beca50-etc-swift\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-public-tls-certs\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487434 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b74f3f-7d40-4aae-a70c-95ff51beca50-log-httpd\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-internal-tls-certs\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.487482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-combined-ca-bundle\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " 
pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.489283 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b74f3f-7d40-4aae-a70c-95ff51beca50-log-httpd\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.489392 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b74f3f-7d40-4aae-a70c-95ff51beca50-run-httpd\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.496460 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-public-tls-certs\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.501575 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-config-data\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.506651 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-combined-ca-bundle\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.510860 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/58b74f3f-7d40-4aae-a70c-95ff51beca50-etc-swift\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.513484 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b74f3f-7d40-4aae-a70c-95ff51beca50-internal-tls-certs\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.524566 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxxx\" (UniqueName: \"kubernetes.io/projected/58b74f3f-7d40-4aae-a70c-95ff51beca50-kube-api-access-lfxxx\") pod \"swift-proxy-6bcdffb5bf-b97n9\" (UID: \"58b74f3f-7d40-4aae-a70c-95ff51beca50\") " pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:11 crc kubenswrapper[4823]: I1206 06:48:11.619000 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:12 crc kubenswrapper[4823]: I1206 06:48:12.140450 4823 scope.go:117] "RemoveContainer" containerID="e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1" Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.299088 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.299897 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="sg-core" containerID="cri-o://30f6cedc50388165455a9dca871dcfdce66dfdf8313c8dd60f032bea7953b0f9" gracePeriod=30 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.299902 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="proxy-httpd" containerID="cri-o://41aeecb1f5c3dec1c632880310f9cd74af0481cd9e260f9abb46d0bf63e3a807" gracePeriod=30 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.299904 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-notification-agent" containerID="cri-o://b92e557dd1017d4367e6dc8cb1e3339d81cc13eabe4589addaceb147bdfbc84a" gracePeriod=30 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.300123 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-central-agent" containerID="cri-o://4ceef2af5cfaed2f862e81044aad73dbeef6768c74f17686033db1f69f407650" gracePeriod=30 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.793107 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": dial tcp 10.217.0.169:9311: connect: connection refused" Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.793232 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5795f4fd-qb9w4" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": dial tcp 10.217.0.169:9311: connect: connection refused" Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.985911 4823 generic.go:334] "Generic (PLEG): container finished" podID="335c336e-79ff-426e-a360-0c0ea58e8941" containerID="41aeecb1f5c3dec1c632880310f9cd74af0481cd9e260f9abb46d0bf63e3a807" exitCode=0 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.986330 4823 generic.go:334] "Generic (PLEG): container finished" podID="335c336e-79ff-426e-a360-0c0ea58e8941" containerID="30f6cedc50388165455a9dca871dcfdce66dfdf8313c8dd60f032bea7953b0f9" exitCode=2 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.986367 4823 generic.go:334] "Generic (PLEG): container finished" podID="335c336e-79ff-426e-a360-0c0ea58e8941" containerID="4ceef2af5cfaed2f862e81044aad73dbeef6768c74f17686033db1f69f407650" exitCode=0 Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.985956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerDied","Data":"41aeecb1f5c3dec1c632880310f9cd74af0481cd9e260f9abb46d0bf63e3a807"} Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.986460 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerDied","Data":"30f6cedc50388165455a9dca871dcfdce66dfdf8313c8dd60f032bea7953b0f9"} Dec 06 06:48:13 crc kubenswrapper[4823]: I1206 06:48:13.986486 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerDied","Data":"4ceef2af5cfaed2f862e81044aad73dbeef6768c74f17686033db1f69f407650"} Dec 06 06:48:15 crc kubenswrapper[4823]: E1206 06:48:15.148460 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Dec 06 06:48:15 crc kubenswrapper[4823]: E1206 06:48:15.148544 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Dec 06 06:48:15 crc kubenswrapper[4823]: E1206 06:48:15.148753 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:38.102.83.174:5001/podified-master-centos10/openstack-openstackclient:watcher_latest,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh57h649h5f8hc6h564h698h566h99hd5h655hd8h555h96h585h64hb6h678h577h6hc4h5c5hd6h575h8h65fh6fhcfh5d9h9ch66bh5d9q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2l7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(bc03fdf8-c76b-4330-b7fb-58142df075c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:48:15 crc kubenswrapper[4823]: E1206 06:48:15.149950 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="bc03fdf8-c76b-4330-b7fb-58142df075c3" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.370478 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9fd7r" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.371938 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.581892 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6qtl\" (UniqueName: \"kubernetes.io/projected/f5301842-d5df-4df6-8699-56f86789df64-kube-api-access-b6qtl\") pod \"f5301842-d5df-4df6-8699-56f86789df64\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.582647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-db-sync-config-data\") pod \"f5301842-d5df-4df6-8699-56f86789df64\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.584037 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9sg\" (UniqueName: \"kubernetes.io/projected/b984559e-efdf-4d21-917f-420506f550da-kube-api-access-td9sg\") pod \"b984559e-efdf-4d21-917f-420506f550da\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.584174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-config\") pod \"b984559e-efdf-4d21-917f-420506f550da\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.594023 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-combined-ca-bundle\") pod \"b984559e-efdf-4d21-917f-420506f550da\" (UID: \"b984559e-efdf-4d21-917f-420506f550da\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.594334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-combined-ca-bundle\") pod \"f5301842-d5df-4df6-8699-56f86789df64\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.594471 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-config-data\") pod \"f5301842-d5df-4df6-8699-56f86789df64\" (UID: \"f5301842-d5df-4df6-8699-56f86789df64\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.592494 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5301842-d5df-4df6-8699-56f86789df64-kube-api-access-b6qtl" (OuterVolumeSpecName: "kube-api-access-b6qtl") pod "f5301842-d5df-4df6-8699-56f86789df64" (UID: "f5301842-d5df-4df6-8699-56f86789df64"). InnerVolumeSpecName "kube-api-access-b6qtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.601126 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6qtl\" (UniqueName: \"kubernetes.io/projected/f5301842-d5df-4df6-8699-56f86789df64-kube-api-access-b6qtl\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.604194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b984559e-efdf-4d21-917f-420506f550da-kube-api-access-td9sg" (OuterVolumeSpecName: "kube-api-access-td9sg") pod "b984559e-efdf-4d21-917f-420506f550da" (UID: "b984559e-efdf-4d21-917f-420506f550da"). InnerVolumeSpecName "kube-api-access-td9sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.611095 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l6rbl"] Dec 06 06:48:15 crc kubenswrapper[4823]: E1206 06:48:15.611717 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b984559e-efdf-4d21-917f-420506f550da" containerName="neutron-db-sync" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.611736 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b984559e-efdf-4d21-917f-420506f550da" containerName="neutron-db-sync" Dec 06 06:48:15 crc kubenswrapper[4823]: E1206 06:48:15.611804 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5301842-d5df-4df6-8699-56f86789df64" containerName="glance-db-sync" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.611814 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5301842-d5df-4df6-8699-56f86789df64" containerName="glance-db-sync" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.612069 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b984559e-efdf-4d21-917f-420506f550da" containerName="neutron-db-sync" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.612111 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5301842-d5df-4df6-8699-56f86789df64" containerName="glance-db-sync" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.614815 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.635265 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f5301842-d5df-4df6-8699-56f86789df64" (UID: "f5301842-d5df-4df6-8699-56f86789df64"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.732606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-catalog-content\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.732719 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-utilities\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.739699 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmv5\" (UniqueName: \"kubernetes.io/projected/06a9a9aa-962e-4cf1-afe9-b831a56f3837-kube-api-access-ffmv5\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.739948 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.739967 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9sg\" (UniqueName: \"kubernetes.io/projected/b984559e-efdf-4d21-917f-420506f550da-kube-api-access-td9sg\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.732820 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6rbl"] Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.735677 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-config" (OuterVolumeSpecName: "config") pod "b984559e-efdf-4d21-917f-420506f550da" (UID: "b984559e-efdf-4d21-917f-420506f550da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.777247 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5301842-d5df-4df6-8699-56f86789df64" (UID: "f5301842-d5df-4df6-8699-56f86789df64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.817313 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b984559e-efdf-4d21-917f-420506f550da" (UID: "b984559e-efdf-4d21-917f-420506f550da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.819836 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-config-data" (OuterVolumeSpecName: "config-data") pod "f5301842-d5df-4df6-8699-56f86789df64" (UID: "f5301842-d5df-4df6-8699-56f86789df64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.841533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-catalog-content\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.841611 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-utilities\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.841637 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffmv5\" (UniqueName: \"kubernetes.io/projected/06a9a9aa-962e-4cf1-afe9-b831a56f3837-kube-api-access-ffmv5\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.842118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-catalog-content\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.842611 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.842646 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b984559e-efdf-4d21-917f-420506f550da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.842685 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.842699 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5301842-d5df-4df6-8699-56f86789df64-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.842728 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-utilities\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.867087 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffmv5\" (UniqueName: \"kubernetes.io/projected/06a9a9aa-962e-4cf1-afe9-b831a56f3837-kube-api-access-ffmv5\") pod \"redhat-operators-l6rbl\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.891787 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5795f4fd-qb9w4" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.943886 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data-custom\") pod \"9bceb03c-e3de-4bfd-b163-69cde861ce00\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.944078 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-combined-ca-bundle\") pod \"9bceb03c-e3de-4bfd-b163-69cde861ce00\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.944130 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bceb03c-e3de-4bfd-b163-69cde861ce00-logs\") pod \"9bceb03c-e3de-4bfd-b163-69cde861ce00\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.944216 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data\") pod \"9bceb03c-e3de-4bfd-b163-69cde861ce00\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.944294 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6whr\" (UniqueName: \"kubernetes.io/projected/9bceb03c-e3de-4bfd-b163-69cde861ce00-kube-api-access-n6whr\") pod \"9bceb03c-e3de-4bfd-b163-69cde861ce00\" (UID: \"9bceb03c-e3de-4bfd-b163-69cde861ce00\") " Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.945481 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bceb03c-e3de-4bfd-b163-69cde861ce00-logs" (OuterVolumeSpecName: "logs") pod "9bceb03c-e3de-4bfd-b163-69cde861ce00" (UID: "9bceb03c-e3de-4bfd-b163-69cde861ce00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.948820 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9bceb03c-e3de-4bfd-b163-69cde861ce00" (UID: "9bceb03c-e3de-4bfd-b163-69cde861ce00"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.949247 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bceb03c-e3de-4bfd-b163-69cde861ce00-kube-api-access-n6whr" (OuterVolumeSpecName: "kube-api-access-n6whr") pod "9bceb03c-e3de-4bfd-b163-69cde861ce00" (UID: "9bceb03c-e3de-4bfd-b163-69cde861ce00"). InnerVolumeSpecName "kube-api-access-n6whr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:15 crc kubenswrapper[4823]: I1206 06:48:15.981445 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bceb03c-e3de-4bfd-b163-69cde861ce00" (UID: "9bceb03c-e3de-4bfd-b163-69cde861ce00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.038697 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5795f4fd-qb9w4" event={"ID":"9bceb03c-e3de-4bfd-b163-69cde861ce00","Type":"ContainerDied","Data":"20cecb6fd82cdb62a2972286e41902a0441925ebfcb6adafb66187caa0f6c193"} Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.038779 4823 scope.go:117] "RemoveContainer" containerID="9184cb05953d1a426eb45a5642e06bd4b7651e1b2717dc0488c41a48546bc559" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.038990 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5795f4fd-qb9w4" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.040425 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.045544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwpkn" event={"ID":"b984559e-efdf-4d21-917f-420506f550da","Type":"ContainerDied","Data":"a535cee0854520719889f2127987bd4c13c061da8068e383c2f7a2adb6cd1b36"} Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.045580 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a535cee0854520719889f2127987bd4c13c061da8068e383c2f7a2adb6cd1b36" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.045638 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwpkn" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.049323 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.050384 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bceb03c-e3de-4bfd-b163-69cde861ce00-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.050464 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6whr\" (UniqueName: \"kubernetes.io/projected/9bceb03c-e3de-4bfd-b163-69cde861ce00-kube-api-access-n6whr\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.050484 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.079404 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9fd7r" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.080385 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9fd7r" event={"ID":"f5301842-d5df-4df6-8699-56f86789df64","Type":"ContainerDied","Data":"9055b8e4bc5d1b5fa577d5d423c1c5a25638f38458794eefb8ae7e987f36388e"} Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.080438 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055b8e4bc5d1b5fa577d5d423c1c5a25638f38458794eefb8ae7e987f36388e" Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.082612 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-openstackclient:watcher_latest\\\"\"" pod="openstack/openstackclient" podUID="bc03fdf8-c76b-4330-b7fb-58142df075c3" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.104309 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data" (OuterVolumeSpecName: "config-data") pod "9bceb03c-e3de-4bfd-b163-69cde861ce00" (UID: "9bceb03c-e3de-4bfd-b163-69cde861ce00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.112637 4823 scope.go:117] "RemoveContainer" containerID="ca4533920163b383473cd1971a80d407da0e40d88b9bc268c6b16e0d655056e1" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.157756 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bceb03c-e3de-4bfd-b163-69cde861ce00-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.480136 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.503035 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d5795f4fd-qb9w4"] Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.515421 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.519135 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d5795f4fd-qb9w4"] Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.571997 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjp9t\" (UniqueName: \"kubernetes.io/projected/6aa2f666-1f1f-4930-a728-81b27d74a0f8-kube-api-access-fjp9t\") pod \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572096 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-config-data\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572121 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-scripts\") pod \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572166 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data-custom\") pod \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572205 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-logs\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572265 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-tls-certs\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572345 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-scripts\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572427 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-secret-key\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572523 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-combined-ca-bundle\") pod \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572549 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dq2fv\" (UniqueName: \"kubernetes.io/projected/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-kube-api-access-dq2fv\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572571 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-combined-ca-bundle\") pod \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\" (UID: \"2bcc21a4-6b09-4804-86d5-85cc7f0267e7\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572625 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data\") pod \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.572651 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6aa2f666-1f1f-4930-a728-81b27d74a0f8-etc-machine-id\") pod \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\" (UID: \"6aa2f666-1f1f-4930-a728-81b27d74a0f8\") " Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.577042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa2f666-1f1f-4930-a728-81b27d74a0f8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6aa2f666-1f1f-4930-a728-81b27d74a0f8" (UID: "6aa2f666-1f1f-4930-a728-81b27d74a0f8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.579810 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-logs" (OuterVolumeSpecName: "logs") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.582305 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-kube-api-access-dq2fv" (OuterVolumeSpecName: "kube-api-access-dq2fv") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "kube-api-access-dq2fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.594810 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa2f666-1f1f-4930-a728-81b27d74a0f8-kube-api-access-fjp9t" (OuterVolumeSpecName: "kube-api-access-fjp9t") pod "6aa2f666-1f1f-4930-a728-81b27d74a0f8" (UID: "6aa2f666-1f1f-4930-a728-81b27d74a0f8"). InnerVolumeSpecName "kube-api-access-fjp9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.594930 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-scripts" (OuterVolumeSpecName: "scripts") pod "6aa2f666-1f1f-4930-a728-81b27d74a0f8" (UID: "6aa2f666-1f1f-4930-a728-81b27d74a0f8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.601389 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.601908 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6aa2f666-1f1f-4930-a728-81b27d74a0f8" (UID: "6aa2f666-1f1f-4930-a728-81b27d74a0f8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.663794 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e9c7-account-create-update-mmhk9"] Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.678423 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-scripts" (OuterVolumeSpecName: "scripts") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.713906 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.713945 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2fv\" (UniqueName: \"kubernetes.io/projected/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-kube-api-access-dq2fv\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.713960 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6aa2f666-1f1f-4930-a728-81b27d74a0f8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.713974 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjp9t\" (UniqueName: \"kubernetes.io/projected/6aa2f666-1f1f-4930-a728-81b27d74a0f8-kube-api-access-fjp9t\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.713987 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.713998 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.714011 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.714022 4823 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.771874 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aa2f666-1f1f-4930-a728-81b27d74a0f8" (UID: "6aa2f666-1f1f-4930-a728-81b27d74a0f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.775768 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-587544c9cf-w6grf"] Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.776390 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon-log" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776417 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon-log" Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.776442 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776450 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.776575 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="cinder-scheduler" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776589 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="cinder-scheduler" Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.776618 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776628 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.776658 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776715 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api" Dec 06 06:48:16 crc kubenswrapper[4823]: E1206 06:48:16.776734 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="probe" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776742 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="probe" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776977 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api-log" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.776999 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.777021 4823 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" containerName="horizon-log" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.777038 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="cinder-scheduler" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.777048 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" containerName="barbican-api" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.777065 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" containerName="probe" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.789372 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.800498 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dc699456d-7slk7"] Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.802268 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.847219 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.848097 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.848406 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.848614 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.848838 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jp6tc" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.855730 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-sb\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.860564 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-nb\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.860902 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-swift-storage-0\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " 
pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.860970 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-config\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.861087 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-svc\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.861508 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2fg\" (UniqueName: \"kubernetes.io/projected/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-kube-api-access-qs2fg\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.864440 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.864467 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.968041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-config\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.968351 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-httpd-config\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.968501 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-svc\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.968645 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48ts\" (UniqueName: \"kubernetes.io/projected/15372216-fc04-44d8-8268-7dbf3b74eeb7-kube-api-access-m48ts\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.968791 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2fg\" (UniqueName: 
\"kubernetes.io/projected/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-kube-api-access-qs2fg\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.968914 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-sb\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.969036 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-config\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.969156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-ovndb-tls-certs\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.969321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-combined-ca-bundle\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.969429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-nb\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.969545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-swift-storage-0\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.977365 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-sb\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.978317 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-swift-storage-0\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.980201 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-config\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:16 crc kubenswrapper[4823]: I1206 06:48:16.981782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-svc\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.002329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-nb\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.012263 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-config-data" (OuterVolumeSpecName: "config-data") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.068618 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2fg\" (UniqueName: \"kubernetes.io/projected/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-kube-api-access-qs2fg\") pod \"dnsmasq-dns-587544c9cf-w6grf\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.076827 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48ts\" (UniqueName: \"kubernetes.io/projected/15372216-fc04-44d8-8268-7dbf3b74eeb7-kube-api-access-m48ts\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.077273 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-config\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.077407 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-ovndb-tls-certs\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.077568 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-combined-ca-bundle\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.077818 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-httpd-config\") pod 
\"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.078000 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.080112 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-587544c9cf-w6grf"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.099041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2bcc21a4-6b09-4804-86d5-85cc7f0267e7" (UID: "2bcc21a4-6b09-4804-86d5-85cc7f0267e7"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.145806 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-config\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.147099 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-ovndb-tls-certs\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.147465 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-combined-ca-bundle\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.153152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-httpd-config\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.164091 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48ts\" (UniqueName: \"kubernetes.io/projected/15372216-fc04-44d8-8268-7dbf3b74eeb7-kube-api-access-m48ts\") pod \"neutron-dc699456d-7slk7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.168204 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.181009 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bcc21a4-6b09-4804-86d5-85cc7f0267e7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.236651 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.263951 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bceb03c-e3de-4bfd-b163-69cde861ce00" path="/var/lib/kubelet/pods/9bceb03c-e3de-4bfd-b163-69cde861ce00/volumes" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.288470 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.316355 4823 generic.go:334] "Generic (PLEG): container finished" podID="335c336e-79ff-426e-a360-0c0ea58e8941" containerID="b92e557dd1017d4367e6dc8cb1e3339d81cc13eabe4589addaceb147bdfbc84a" exitCode=0 Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.465427 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.480861 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data" (OuterVolumeSpecName: "config-data") pod "6aa2f666-1f1f-4930-a728-81b27d74a0f8" (UID: "6aa2f666-1f1f-4930-a728-81b27d74a0f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc5bf4b4-qft5r" event={"ID":"2bcc21a4-6b09-4804-86d5-85cc7f0267e7","Type":"ContainerDied","Data":"d94b563647fc919fb468045049d017a24c5e940d9629b6251185df470f430a14"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483124 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc699456d-7slk7"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483145 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerDied","Data":"b92e557dd1017d4367e6dc8cb1e3339d81cc13eabe4589addaceb147bdfbc84a"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" event={"ID":"d3b29187-c6a2-4b88-9215-759fe3cb8dad","Type":"ContainerStarted","Data":"ed4564e0772f5ba97e5fca2d4f39e43d101854cacf3cb82296ff12c687f33974"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483171 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qdzrg" event={"ID":"afb8dbce-6f68-4245-971e-e9087ed93cf1","Type":"ContainerStarted","Data":"8567e59483d764c75a278b01ea267bacab38b7c04024658dae2e437d1a6528a3"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483185 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-abd5-account-create-update-4jjrw"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483197 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qdzrg"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483216 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-f788j"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483231 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-587544c9cf-w6grf"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483245 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerStarted","Data":"0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483258 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6aa2f666-1f1f-4930-a728-81b27d74a0f8","Type":"ContainerDied","Data":"c14030713e9b27660a8d9f606b023211449d1e88f81c02059a29a077dd12b1e2"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483273 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" event={"ID":"199828c4-e1bd-42a8-b35c-ba26f4c980b8","Type":"ContainerStarted","Data":"be636bb7bb51621b737c617c3b4f47538030526e76901c94fbdf0b6c14707ae9"} Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.483364 4823 scope.go:117] "RemoveContainer" containerID="a81160012932675fd601ecff4024d99b9f89d28f93683cfb6c8170e7604051be" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.485427 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bcdffb5bf-b97n9"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.504094 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa2f666-1f1f-4930-a728-81b27d74a0f8-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.531450 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b97456bf-s7qh8"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.550566 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b97456bf-s7qh8"] Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.551856 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.594843 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.712959 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-combined-ca-bundle\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713088 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-run-httpd\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-sg-core-conf-yaml\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713191 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-config-data\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713248 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gvlg\" (UniqueName: \"kubernetes.io/projected/335c336e-79ff-426e-a360-0c0ea58e8941-kube-api-access-4gvlg\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713308 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-log-httpd\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713416 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-scripts\") pod \"335c336e-79ff-426e-a360-0c0ea58e8941\" (UID: \"335c336e-79ff-426e-a360-0c0ea58e8941\") " Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.713985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-nb\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.714106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-svc\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.714162 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-swift-storage-0\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.714189 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8nt\" (UniqueName: \"kubernetes.io/projected/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-kube-api-access-2g8nt\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.714273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-config\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.714326 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-sb\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.725085 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.725778 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.806637 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-scripts" (OuterVolumeSpecName: "scripts") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.817743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-config\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.817826 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-sb\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818015 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-nb\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818171 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-svc\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818236 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-swift-storage-0\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8nt\" (UniqueName: \"kubernetes.io/projected/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-kube-api-access-2g8nt\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818349 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818370 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818382 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/335c336e-79ff-426e-a360-0c0ea58e8941-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.818786 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-config\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:48:17 crc kubenswrapper[4823]: 
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.819547 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-sb\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.819906 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-svc\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.821596 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-nb\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.822453 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335c336e-79ff-426e-a360-0c0ea58e8941-kube-api-access-4gvlg" (OuterVolumeSpecName: "kube-api-access-4gvlg") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "kube-api-access-4gvlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.835058 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-swift-storage-0\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.840414 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8nt\" (UniqueName: \"kubernetes.io/projected/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-kube-api-access-2g8nt\") pod \"dnsmasq-dns-54b97456bf-s7qh8\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.918947 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 06:48:17 crc kubenswrapper[4823]: E1206 06:48:17.919676 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="sg-core"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.919775 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="sg-core"
Dec 06 06:48:17 crc kubenswrapper[4823]: E1206 06:48:17.919842 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="proxy-httpd"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.919895 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="proxy-httpd"
Dec 06 06:48:17 crc kubenswrapper[4823]: E1206 06:48:17.919956 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-central-agent"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920016 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-central-agent"
Dec 06 06:48:17 crc kubenswrapper[4823]: E1206 06:48:17.920091 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-notification-agent"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920156 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-notification-agent"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920461 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-central-agent"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920542 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="sg-core"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920619 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="ceilometer-notification-agent"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920739 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" containerName="proxy-httpd"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.921981 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.920625 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gvlg\" (UniqueName: \"kubernetes.io/projected/335c336e-79ff-426e-a360-0c0ea58e8941-kube-api-access-4gvlg\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.926189 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.926571 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.942220 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:17 crc kubenswrapper[4823]: I1206 06:48:17.945830 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rdcrh"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.015184 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2pbnm"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.027171 4823 scope.go:117] "RemoveContainer" containerID="ef7e20b45fa10c9e0534bf0e943c77a7024e8c1acb561998214014561bcd023a"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-logs\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028230 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028312 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnvj\" (UniqueName: \"kubernetes.io/projected/873b532b-a51c-43b7-87bd-6d80634122b7-kube-api-access-qgnvj\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028432 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.028542 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.085898 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.115779 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnvj\" (UniqueName: \"kubernetes.io/projected/873b532b-a51c-43b7-87bd-6d80634122b7-kube-api-access-qgnvj\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132353 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132390 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-logs\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132500 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.132930 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.138199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.140889 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-logs\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.165766 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.167829 4823 scope.go:117] "RemoveContainer" containerID="c037fe83a3e74d753802e3502904161196d7284b68a052fdcd7b6a41bfe6d55d"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.176709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.181685 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.182555 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnvj\" (UniqueName: \"kubernetes.io/projected/873b532b-a51c-43b7-87bd-6d80634122b7-kube-api-access-qgnvj\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.183829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.217527 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5f48-account-create-update-9428s"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.244049 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.253966 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.254123 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.258416 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.296778 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6rbl"]
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.341033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k9t\" (UniqueName: \"kubernetes.io/projected/9b5dc60b-23c7-4e50-8944-3917a44ad224-kube-api-access-d2k9t\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.341132 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b5dc60b-23c7-4e50-8944-3917a44ad224-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.341186 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.341345 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.341383 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-config-data\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.341422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.448885 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.449373 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-config-data\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.449434 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0"
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.449550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2k9t\" (UniqueName: \"kubernetes.io/projected/9b5dc60b-23c7-4e50-8944-3917a44ad224-kube-api-access-d2k9t\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.449643 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b5dc60b-23c7-4e50-8944-3917a44ad224-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.449719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.451831 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b5dc60b-23c7-4e50-8944-3917a44ad224-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.552454 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.560787 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.569598 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.579451 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.585053 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2pbnm" event={"ID":"20c6e0b9-0c53-43ea-a471-9076b51f877b","Type":"ContainerStarted","Data":"754ca6836515becaea394b66c815b51edc374b807ecc2806f615c091d8b85716"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.631799 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.641295 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" event={"ID":"199828c4-e1bd-42a8-b35c-ba26f4c980b8","Type":"ContainerStarted","Data":"b3a1e8bde4132a6cbd5278eb2ca708e0dc12cd7c9e2443445a95b2e7f0420b28"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.656995 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerStarted","Data":"e294750f8d5d730431ba90a69f25e9fb8438a82381a65ac2c19e664a7d4c8e87"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.668507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.668586 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.668623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.668666 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxn2l\" (UniqueName: \"kubernetes.io/projected/32dd3184-895e-4094-8360-5ffe3627daf2-kube-api-access-lxn2l\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.668782 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.668959 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.669010 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.705726 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-587544c9cf-w6grf"] Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.711557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"335c336e-79ff-426e-a360-0c0ea58e8941","Type":"ContainerDied","Data":"12ca0d95d1957d9526cdbc0ac2d538bd9891fa3f24a0145390eaa7755fb9d89d"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.711782 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.726246 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-config-data\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.726839 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.727158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" event={"ID":"d3b29187-c6a2-4b88-9215-759fe3cb8dad","Type":"ContainerStarted","Data":"805375e951eb3e8329cc48a973f91fcc6c76c9f2fd86979d507b8407f0b574f9"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.731419 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2k9t\" (UniqueName: \"kubernetes.io/projected/9b5dc60b-23c7-4e50-8944-3917a44ad224-kube-api-access-d2k9t\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.734837 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" event={"ID":"58b74f3f-7d40-4aae-a70c-95ff51beca50","Type":"ContainerStarted","Data":"cc38218ce561f1af6038ed11223a46cf841cb806511e1b8c8843cc95fa828058"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.736441 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b5dc60b-23c7-4e50-8944-3917a44ad224-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9b5dc60b-23c7-4e50-8944-3917a44ad224\") " pod="openstack/cinder-scheduler-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.749940 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f788j" 
event={"ID":"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60","Type":"ContainerStarted","Data":"d23ce9403ba831072755b20f8707bb1723ad6d7b4f985a30343ee7110dd40f0f"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.769067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5f48-account-create-update-9428s" event={"ID":"0439c056-347b-4a05-95aa-e85289754ecc","Type":"ContainerStarted","Data":"3b9280346d195efdff10a8e82159cbf5322b2a3fa131576c25838c4a7020ee69"} Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.770665 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxn2l\" (UniqueName: \"kubernetes.io/projected/32dd3184-895e-4094-8360-5ffe3627daf2-kube-api-access-lxn2l\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.770764 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.779396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.779508 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.779827 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.779900 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.779959 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.790168 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") device mount path \"/mnt/openstack/pv11\"" 
pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.790611 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.797630 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.805680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.810254 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" podStartSLOduration=11.810219792 podStartE2EDuration="11.810219792s" podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:18.674337966 +0000 UTC m=+1399.960089926" watchObservedRunningTime="2025-12-06 06:48:18.810219792 +0000 UTC m=+1400.095971752" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.810779 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc699456d-7slk7"] Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.838728 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxn2l\" (UniqueName: \"kubernetes.io/projected/32dd3184-895e-4094-8360-5ffe3627daf2-kube-api-access-lxn2l\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.869195 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" podStartSLOduration=11.869163635 podStartE2EDuration="11.869163635s" podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:18.769476035 +0000 UTC m=+1400.055227995" watchObservedRunningTime="2025-12-06 06:48:18.869163635 +0000 UTC m=+1400.154915595" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.923321 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:18 crc kubenswrapper[4823]: I1206 06:48:18.931068 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.348588 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa2f666-1f1f-4930-a728-81b27d74a0f8" path="/var/lib/kubelet/pods/6aa2f666-1f1f-4930-a728-81b27d74a0f8/volumes" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.382291 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.410971 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.418261 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.419861 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.449896 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.449946 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.484427 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-config-data" (OuterVolumeSpecName: "config-data") pod "335c336e-79ff-426e-a360-0c0ea58e8941" (UID: "335c336e-79ff-426e-a360-0c0ea58e8941"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.496084 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.551602 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335c336e-79ff-426e-a360-0c0ea58e8941-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.755939 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b97456bf-s7qh8"] Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.797769 4823 scope.go:117] "RemoveContainer" containerID="ce9e0741c3380a1227df57cce5b2d72c1a3177dc3b33c0439fbc5e078864b731" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.821625 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qdzrg" event={"ID":"afb8dbce-6f68-4245-971e-e9087ed93cf1","Type":"ContainerStarted","Data":"72f1a5453be0ee81c3a3de2c9eac46e9620926a078629fc9420781a5d9a30395"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.831770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f788j" event={"ID":"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60","Type":"ContainerStarted","Data":"a1941d82aa7c25d9ec68304e1c06c5806c4fb64465e8c386f8e04559e1a97de0"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.833853 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc699456d-7slk7" event={"ID":"15372216-fc04-44d8-8268-7dbf3b74eeb7","Type":"ContainerStarted","Data":"0998216dcecd0fc8e85aaa8950b43e8ede75f83dcbb6fcdce0a67c9b2c227824"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.836393 4823 generic.go:334] "Generic (PLEG): container finished" podID="199828c4-e1bd-42a8-b35c-ba26f4c980b8" containerID="b3a1e8bde4132a6cbd5278eb2ca708e0dc12cd7c9e2443445a95b2e7f0420b28" exitCode=0 Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.836456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" event={"ID":"199828c4-e1bd-42a8-b35c-ba26f4c980b8","Type":"ContainerDied","Data":"b3a1e8bde4132a6cbd5278eb2ca708e0dc12cd7c9e2443445a95b2e7f0420b28"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.838776 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" event={"ID":"21e6e5f3-ac1d-48b6-871a-8a8d52cee775","Type":"ContainerStarted","Data":"b8a60b1e3bc1985cf46dbb083522a702f63852240c1068cd19b44e8b5d225689"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.859176 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-qdzrg" podStartSLOduration=12.859152726 podStartE2EDuration="12.859152726s" podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:19.839604411 +0000 UTC m=+1401.125356371" watchObservedRunningTime="2025-12-06 06:48:19.859152726 +0000 UTC m=+1401.144904686" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.866225 4823 generic.go:334] "Generic (PLEG): container finished" podID="d3b29187-c6a2-4b88-9215-759fe3cb8dad" containerID="805375e951eb3e8329cc48a973f91fcc6c76c9f2fd86979d507b8407f0b574f9" exitCode=0 Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.866884 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" 
event={"ID":"d3b29187-c6a2-4b88-9215-759fe3cb8dad","Type":"ContainerDied","Data":"805375e951eb3e8329cc48a973f91fcc6c76c9f2fd86979d507b8407f0b574f9"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.877436 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-f788j" podStartSLOduration=12.877408504 podStartE2EDuration="12.877408504s" podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:19.862234825 +0000 UTC m=+1401.147986785" watchObservedRunningTime="2025-12-06 06:48:19.877408504 +0000 UTC m=+1401.163160474" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.878958 4823 scope.go:117] "RemoveContainer" containerID="41aeecb1f5c3dec1c632880310f9cd74af0481cd9e260f9abb46d0bf63e3a807" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.882774 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.882984 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" event={"ID":"6f95d7d5-1ff4-4b6b-9451-dd4511295eba","Type":"ContainerStarted","Data":"ea73b2fb68597cc8f121e6f4e6a0a7ad56eb0322fdb4df569b8d8eb556a04bbe"} Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.899161 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.916379 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.929310 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:19 crc kubenswrapper[4823]: I1206 06:48:19.988815 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:19.999211 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.003073 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.033821 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.068833 4823 scope.go:117] "RemoveContainer" containerID="30f6cedc50388165455a9dca871dcfdce66dfdf8313c8dd60f032bea7953b0f9" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.139290 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.216960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.217123 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-log-httpd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.217231 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-config-data\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.217282 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-scripts\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.217444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhdd\" (UniqueName: \"kubernetes.io/projected/9e9a933e-7013-4daa-92f7-8809b5f09042-kube-api-access-kjhdd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.217496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.217615 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-run-httpd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.219433 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 
06:48:20.231458 4823 scope.go:117] "RemoveContainer" containerID="b92e557dd1017d4367e6dc8cb1e3339d81cc13eabe4589addaceb147bdfbc84a" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.319846 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjhdd\" (UniqueName: \"kubernetes.io/projected/9e9a933e-7013-4daa-92f7-8809b5f09042-kube-api-access-kjhdd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.319889 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.319948 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-run-httpd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.319986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.320033 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-log-httpd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.320081 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-config-data\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.320111 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-scripts\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.325307 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-log-httpd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.325586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-run-httpd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.337150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.337358 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.337378 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-config-data\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.337695 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-scripts\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.351189 4823 scope.go:117] "RemoveContainer" containerID="4ceef2af5cfaed2f862e81044aad73dbeef6768c74f17686033db1f69f407650" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.396474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjhdd\" (UniqueName: \"kubernetes.io/projected/9e9a933e-7013-4daa-92f7-8809b5f09042-kube-api-access-kjhdd\") pod \"ceilometer-0\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.435216 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.466970 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.508147 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.173:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.613537 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 06:48:20 crc kubenswrapper[4823]: W1206 06:48:20.726709 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b5dc60b_23c7_4e50_8944_3917a44ad224.slice/crio-f7956bdcfe5ddc60a4723221e643bb1853e2430aa13a735105e7c8fdf3cce2cf WatchSource:0}: Error finding container f7956bdcfe5ddc60a4723221e643bb1853e2430aa13a735105e7c8fdf3cce2cf: Status 404 returned error can't find the container with id f7956bdcfe5ddc60a4723221e643bb1853e2430aa13a735105e7c8fdf3cce2cf Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.733196 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.930210 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc699456d-7slk7" event={"ID":"15372216-fc04-44d8-8268-7dbf3b74eeb7","Type":"ContainerStarted","Data":"6a72b8947a46f7bf870e3d45ac3ddad3f60a11982084a374c762d6001905106a"} 
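The pod_startup_latency_tracker entries above log podStartSLOduration next to the timestamps it is derived from. In every entry in this section, firstStartedPulling and lastFinishedPulling are the Go zero time ("0001-01-01 00:00:00 +0000 UTC"), so no image-pull interval is involved and the logged duration works out to observedRunningTime minus podCreationTimestamp. A minimal sketch of that arithmetic, using the values logged for openstack/nova-api-db-create-f788j (the plain-subtraction reading is an assumption, but it matches the numbers shown):

from datetime import datetime, timezone

# podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" (from the log above)
created = datetime(2025, 12, 6, 6, 48, 7, tzinfo=timezone.utc)
# watchObservedRunningTime="2025-12-06 06:48:19.877408504 +0000 UTC";
# datetime only carries microseconds, so the trailing nanoseconds are truncated
observed = datetime(2025, 12, 6, 6, 48, 19, 877408, tzinfo=timezone.utc)

slo = (observed - created).total_seconds()
print(f"podStartSLOduration ~= {slo:.6f}s")  # 12.877408s; the log shows 12.877408504s

The same subtraction reproduces the other durations in this section, e.g. 11.810219792s for nova-cell0-abd5-account-create-update-4jjrw, 14.270300395s for nova-cell1-db-create-2pbnm, and 6.259081062s for neutron-dc699456d-7slk7 (created at 06:48:16).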
Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.938319 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b5dc60b-23c7-4e50-8944-3917a44ad224","Type":"ContainerStarted","Data":"f7956bdcfe5ddc60a4723221e643bb1853e2430aa13a735105e7c8fdf3cce2cf"} Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.966933 4823 generic.go:334] "Generic (PLEG): container finished" podID="6f95d7d5-1ff4-4b6b-9451-dd4511295eba" containerID="ae57fffbf7bb6ecc710949cddbe2d5fb4123001859d062e19eb8c828a65e039b" exitCode=0 Dec 06 06:48:20 crc kubenswrapper[4823]: I1206 06:48:20.967037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" event={"ID":"6f95d7d5-1ff4-4b6b-9451-dd4511295eba","Type":"ContainerDied","Data":"ae57fffbf7bb6ecc710949cddbe2d5fb4123001859d062e19eb8c828a65e039b"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.043103 4823 generic.go:334] "Generic (PLEG): container finished" podID="afb8dbce-6f68-4245-971e-e9087ed93cf1" containerID="72f1a5453be0ee81c3a3de2c9eac46e9620926a078629fc9420781a5d9a30395" exitCode=0 Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.043854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qdzrg" event={"ID":"afb8dbce-6f68-4245-971e-e9087ed93cf1","Type":"ContainerDied","Data":"72f1a5453be0ee81c3a3de2c9eac46e9620926a078629fc9420781a5d9a30395"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.080175 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.110487 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" event={"ID":"58b74f3f-7d40-4aae-a70c-95ff51beca50","Type":"ContainerStarted","Data":"7b97b678bfe286fea36755d326676f66ffe1bb6ec0fef3117760cfadcdac1c3f"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.115993 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5f48-account-create-update-9428s" event={"ID":"0439c056-347b-4a05-95aa-e85289754ecc","Type":"ContainerStarted","Data":"71f6e30d66b6d046a259a4606ba2845e1ec385954ace5f6911d474e41efa8b25"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.122605 4823 generic.go:334] "Generic (PLEG): container finished" podID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerID="8cf09828a572b82e600434c6287dcccf99a2ff2fed74e2cf6ca73faa82960110" exitCode=0 Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.122737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerDied","Data":"8cf09828a572b82e600434c6287dcccf99a2ff2fed74e2cf6ca73faa82960110"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.135899 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-5f48-account-create-update-9428s" podStartSLOduration=14.135870031 podStartE2EDuration="14.135870031s" podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:21.132191084 +0000 UTC m=+1402.417943044" watchObservedRunningTime="2025-12-06 06:48:21.135870031 +0000 UTC m=+1402.421621991" Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.137101 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-2pbnm" event={"ID":"20c6e0b9-0c53-43ea-a471-9076b51f877b","Type":"ContainerStarted","Data":"b46f30ace5e01e63333fc468cf37f7f89802f526709ad97d0690873443e39cb4"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.157175 4823 generic.go:334] "Generic (PLEG): container finished" podID="7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" containerID="a1941d82aa7c25d9ec68304e1c06c5806c4fb64465e8c386f8e04559e1a97de0" exitCode=0 Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.180293 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335c336e-79ff-426e-a360-0c0ea58e8941" path="/var/lib/kubelet/pods/335c336e-79ff-426e-a360-0c0ea58e8941/volumes" Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.181219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"873b532b-a51c-43b7-87bd-6d80634122b7","Type":"ContainerStarted","Data":"70e9f3e72753605bad3132bf8e595b16d60787c14c10a47af5f60cf8b02af169"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.181254 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.181270 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f788j" event={"ID":"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60","Type":"ContainerDied","Data":"a1941d82aa7c25d9ec68304e1c06c5806c4fb64465e8c386f8e04559e1a97de0"} Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.270329 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-2pbnm" podStartSLOduration=14.270300395 podStartE2EDuration="14.270300395s" podCreationTimestamp="2025-12-06 06:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:21.175488166 +0000 UTC m=+1402.461240136" watchObservedRunningTime="2025-12-06 06:48:21.270300395 +0000 UTC m=+1402.556052365" Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.307899 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.804762 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.855526 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.972281 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.994453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-swift-storage-0\") pod \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.994584 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2fg\" (UniqueName: \"kubernetes.io/projected/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-kube-api-access-qs2fg\") pod \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.994631 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-svc\") pod \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.994753 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-nb\") pod \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.994788 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-config\") pod \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " Dec 06 06:48:21 crc kubenswrapper[4823]: I1206 06:48:21.994899 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-sb\") pod \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\" (UID: \"6f95d7d5-1ff4-4b6b-9451-dd4511295eba\") " Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.028100 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.028337 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-kube-api-access-qs2fg" (OuterVolumeSpecName: "kube-api-access-qs2fg") pod "6f95d7d5-1ff4-4b6b-9451-dd4511295eba" (UID: "6f95d7d5-1ff4-4b6b-9451-dd4511295eba"). InnerVolumeSpecName "kube-api-access-qs2fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.038800 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f95d7d5-1ff4-4b6b-9451-dd4511295eba" (UID: "6f95d7d5-1ff4-4b6b-9451-dd4511295eba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.064041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f95d7d5-1ff4-4b6b-9451-dd4511295eba" (UID: "6f95d7d5-1ff4-4b6b-9451-dd4511295eba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.073534 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-config" (OuterVolumeSpecName: "config") pod "6f95d7d5-1ff4-4b6b-9451-dd4511295eba" (UID: "6f95d7d5-1ff4-4b6b-9451-dd4511295eba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.097201 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2fg\" (UniqueName: \"kubernetes.io/projected/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-kube-api-access-qs2fg\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.097242 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.097255 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.097263 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.102419 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f95d7d5-1ff4-4b6b-9451-dd4511295eba" (UID: "6f95d7d5-1ff4-4b6b-9451-dd4511295eba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.131430 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f95d7d5-1ff4-4b6b-9451-dd4511295eba" (UID: "6f95d7d5-1ff4-4b6b-9451-dd4511295eba"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.198774 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.198829 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f95d7d5-1ff4-4b6b-9451-dd4511295eba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.225806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc699456d-7slk7" event={"ID":"15372216-fc04-44d8-8268-7dbf3b74eeb7","Type":"ContainerStarted","Data":"53ee6989e7fbf101b0ddde565302bc65b9407ab427347fe0d1879284612869bf"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.226210 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.235410 4823 generic.go:334] "Generic (PLEG): container finished" podID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerID="0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5" exitCode=0 Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.235512 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" event={"ID":"21e6e5f3-ac1d-48b6-871a-8a8d52cee775","Type":"ContainerDied","Data":"0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.247116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerStarted","Data":"f9652949a4c9a2de904e28dc7b948aaf15789011d650bae27e783d09808b345a"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.259106 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dc699456d-7slk7" podStartSLOduration=6.259081062 podStartE2EDuration="6.259081062s" podCreationTimestamp="2025-12-06 06:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:22.252293686 +0000 UTC m=+1403.538045666" watchObservedRunningTime="2025-12-06 06:48:22.259081062 +0000 UTC m=+1403.544833022" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.274547 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.275340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" event={"ID":"6f95d7d5-1ff4-4b6b-9451-dd4511295eba","Type":"ContainerDied","Data":"ea73b2fb68597cc8f121e6f4e6a0a7ad56eb0322fdb4df569b8d8eb556a04bbe"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.275406 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-587544c9cf-w6grf" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.275456 4823 scope.go:117] "RemoveContainer" containerID="ae57fffbf7bb6ecc710949cddbe2d5fb4123001859d062e19eb8c828a65e039b" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.318128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" event={"ID":"58b74f3f-7d40-4aae-a70c-95ff51beca50","Type":"ContainerStarted","Data":"265cd1709c612739a42cd1f04e7293bce65d7952f08a5c95395fb09eed9289b2"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.319441 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.319479 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.323603 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.357206 4823 generic.go:334] "Generic (PLEG): container finished" podID="0439c056-347b-4a05-95aa-e85289754ecc" containerID="71f6e30d66b6d046a259a4606ba2845e1ec385954ace5f6911d474e41efa8b25" exitCode=0 Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.357280 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5f48-account-create-update-9428s" event={"ID":"0439c056-347b-4a05-95aa-e85289754ecc","Type":"ContainerDied","Data":"71f6e30d66b6d046a259a4606ba2845e1ec385954ace5f6911d474e41efa8b25"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.376059 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32dd3184-895e-4094-8360-5ffe3627daf2","Type":"ContainerStarted","Data":"73a38440cfbfbd2b5370c8a0b26c3576f136146fabadd08eec71c5b560e5de36"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.417356 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3b29187-c6a2-4b88-9215-759fe3cb8dad-operator-scripts\") pod \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.417550 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zddqb\" (UniqueName: \"kubernetes.io/projected/d3b29187-c6a2-4b88-9215-759fe3cb8dad-kube-api-access-zddqb\") pod \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\" (UID: \"d3b29187-c6a2-4b88-9215-759fe3cb8dad\") " Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.417783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199828c4-e1bd-42a8-b35c-ba26f4c980b8-operator-scripts\") pod \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.417819 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqj94\" (UniqueName: \"kubernetes.io/projected/199828c4-e1bd-42a8-b35c-ba26f4c980b8-kube-api-access-jqj94\") pod \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\" (UID: \"199828c4-e1bd-42a8-b35c-ba26f4c980b8\") " Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.425640 4823 
generic.go:334] "Generic (PLEG): container finished" podID="20c6e0b9-0c53-43ea-a471-9076b51f877b" containerID="b46f30ace5e01e63333fc468cf37f7f89802f526709ad97d0690873443e39cb4" exitCode=0 Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.426934 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2pbnm" event={"ID":"20c6e0b9-0c53-43ea-a471-9076b51f877b","Type":"ContainerDied","Data":"b46f30ace5e01e63333fc468cf37f7f89802f526709ad97d0690873443e39cb4"} Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.428388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3b29187-c6a2-4b88-9215-759fe3cb8dad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3b29187-c6a2-4b88-9215-759fe3cb8dad" (UID: "d3b29187-c6a2-4b88-9215-759fe3cb8dad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.433334 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b29187-c6a2-4b88-9215-759fe3cb8dad-kube-api-access-zddqb" (OuterVolumeSpecName: "kube-api-access-zddqb") pod "d3b29187-c6a2-4b88-9215-759fe3cb8dad" (UID: "d3b29187-c6a2-4b88-9215-759fe3cb8dad"). InnerVolumeSpecName "kube-api-access-zddqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.433820 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199828c4-e1bd-42a8-b35c-ba26f4c980b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "199828c4-e1bd-42a8-b35c-ba26f4c980b8" (UID: "199828c4-e1bd-42a8-b35c-ba26f4c980b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.514334 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199828c4-e1bd-42a8-b35c-ba26f4c980b8-kube-api-access-jqj94" (OuterVolumeSpecName: "kube-api-access-jqj94") pod "199828c4-e1bd-42a8-b35c-ba26f4c980b8" (UID: "199828c4-e1bd-42a8-b35c-ba26f4c980b8"). InnerVolumeSpecName "kube-api-access-jqj94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.523169 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199828c4-e1bd-42a8-b35c-ba26f4c980b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.523199 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqj94\" (UniqueName: \"kubernetes.io/projected/199828c4-e1bd-42a8-b35c-ba26f4c980b8-kube-api-access-jqj94\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.523209 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3b29187-c6a2-4b88-9215-759fe3cb8dad-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.523218 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zddqb\" (UniqueName: \"kubernetes.io/projected/d3b29187-c6a2-4b88-9215-759fe3cb8dad-kube-api-access-zddqb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.579006 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-587544c9cf-w6grf"] Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.642179 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-587544c9cf-w6grf"] Dec 06 06:48:22 crc kubenswrapper[4823]: I1206 06:48:22.787705 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6bcdffb5bf-b97n9" podStartSLOduration=11.787661394 podStartE2EDuration="11.787661394s" podCreationTimestamp="2025-12-06 06:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:22.500212478 +0000 UTC m=+1403.785964438" watchObservedRunningTime="2025-12-06 06:48:22.787661394 +0000 UTC m=+1404.073413354" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.021504 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6649b9bbc9-h8s24"] Dec 06 06:48:23 crc kubenswrapper[4823]: E1206 06:48:23.022409 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199828c4-e1bd-42a8-b35c-ba26f4c980b8" containerName="mariadb-account-create-update" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.022424 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="199828c4-e1bd-42a8-b35c-ba26f4c980b8" containerName="mariadb-account-create-update" Dec 06 06:48:23 crc kubenswrapper[4823]: E1206 06:48:23.022439 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f95d7d5-1ff4-4b6b-9451-dd4511295eba" containerName="init" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.022446 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f95d7d5-1ff4-4b6b-9451-dd4511295eba" containerName="init" Dec 06 06:48:23 crc kubenswrapper[4823]: E1206 06:48:23.022466 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b29187-c6a2-4b88-9215-759fe3cb8dad" containerName="mariadb-account-create-update" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.022472 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b29187-c6a2-4b88-9215-759fe3cb8dad" containerName="mariadb-account-create-update" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.022701 4823 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6f95d7d5-1ff4-4b6b-9451-dd4511295eba" containerName="init" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.022727 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b29187-c6a2-4b88-9215-759fe3cb8dad" containerName="mariadb-account-create-update" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.022739 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="199828c4-e1bd-42a8-b35c-ba26f4c980b8" containerName="mariadb-account-create-update" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.023943 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.034337 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.034537 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.061997 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6649b9bbc9-h8s24"] Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.145878 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-config\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.145959 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdl9h\" (UniqueName: \"kubernetes.io/projected/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-kube-api-access-mdl9h\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.146002 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-internal-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.146041 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-ovndb-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.146103 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-httpd-config\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.146131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-public-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " 
pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.146213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-combined-ca-bundle\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.165294 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f95d7d5-1ff4-4b6b-9451-dd4511295eba" path="/var/lib/kubelet/pods/6f95d7d5-1ff4-4b6b-9451-dd4511295eba/volumes" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.249730 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-config\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.249849 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdl9h\" (UniqueName: \"kubernetes.io/projected/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-kube-api-access-mdl9h\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.250167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-internal-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.250220 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-ovndb-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.250349 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-httpd-config\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.250381 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-public-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.250523 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-combined-ca-bundle\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.272135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-config\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.277240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-httpd-config\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.279328 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdl9h\" (UniqueName: \"kubernetes.io/projected/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-kube-api-access-mdl9h\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.281551 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-ovndb-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.284461 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-public-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.285086 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-internal-tls-certs\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.285185 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bf62ef-c456-4bf0-a670-a39f3b3a7079-combined-ca-bundle\") pod \"neutron-6649b9bbc9-h8s24\" (UID: \"e2bf62ef-c456-4bf0-a670-a39f3b3a7079\") " pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.446078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-f788j" event={"ID":"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60","Type":"ContainerDied","Data":"d23ce9403ba831072755b20f8707bb1723ad6d7b4f985a30343ee7110dd40f0f"} Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.446135 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23ce9403ba831072755b20f8707bb1723ad6d7b4f985a30343ee7110dd40f0f" Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.452016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32dd3184-895e-4094-8360-5ffe3627daf2","Type":"ContainerStarted","Data":"279b6e3fbe24ffee338c299383537193605b04424c02847a9769ff210aad98a6"} Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.454030 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw" 
event={"ID":"199828c4-e1bd-42a8-b35c-ba26f4c980b8","Type":"ContainerDied","Data":"be636bb7bb51621b737c617c3b4f47538030526e76901c94fbdf0b6c14707ae9"}
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.454193 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be636bb7bb51621b737c617c3b4f47538030526e76901c94fbdf0b6c14707ae9"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.454358 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abd5-account-create-update-4jjrw"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.464052 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" event={"ID":"21e6e5f3-ac1d-48b6-871a-8a8d52cee775","Type":"ContainerStarted","Data":"4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87"}
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.465577 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.471061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9" event={"ID":"d3b29187-c6a2-4b88-9215-759fe3cb8dad","Type":"ContainerDied","Data":"ed4564e0772f5ba97e5fca2d4f39e43d101854cacf3cb82296ff12c687f33974"}
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.471111 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4564e0772f5ba97e5fca2d4f39e43d101854cacf3cb82296ff12c687f33974"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.471192 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e9c7-account-create-update-mmhk9"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.487694 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" podStartSLOduration=6.487655287 podStartE2EDuration="6.487655287s" podCreationTimestamp="2025-12-06 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:23.485588928 +0000 UTC m=+1404.771340898" watchObservedRunningTime="2025-12-06 06:48:23.487655287 +0000 UTC m=+1404.773407247"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.488292 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"873b532b-a51c-43b7-87bd-6d80634122b7","Type":"ContainerStarted","Data":"b8205e516aa5e66fcda7c3db023940317641383a0a4712958faa0784b0bfecaa"}
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.498349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qdzrg" event={"ID":"afb8dbce-6f68-4245-971e-e9087ed93cf1","Type":"ContainerDied","Data":"8567e59483d764c75a278b01ea267bacab38b7c04024658dae2e437d1a6528a3"}
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.498400 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8567e59483d764c75a278b01ea267bacab38b7c04024658dae2e437d1a6528a3"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.752123 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6649b9bbc9-h8s24"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.820534 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qdzrg"
Dec 06 06:48:23 crc kubenswrapper[4823]: I1206 06:48:23.858464 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f788j"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.015307 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjfz9\" (UniqueName: \"kubernetes.io/projected/afb8dbce-6f68-4245-971e-e9087ed93cf1-kube-api-access-tjfz9\") pod \"afb8dbce-6f68-4245-971e-e9087ed93cf1\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.015438 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54f72\" (UniqueName: \"kubernetes.io/projected/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-kube-api-access-54f72\") pod \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.015746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-operator-scripts\") pod \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\" (UID: \"7930a7dd-359c-4d6d-9a66-de8eaa5f6f60\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.015793 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8dbce-6f68-4245-971e-e9087ed93cf1-operator-scripts\") pod \"afb8dbce-6f68-4245-971e-e9087ed93cf1\" (UID: \"afb8dbce-6f68-4245-971e-e9087ed93cf1\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.017497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb8dbce-6f68-4245-971e-e9087ed93cf1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afb8dbce-6f68-4245-971e-e9087ed93cf1" (UID: "afb8dbce-6f68-4245-971e-e9087ed93cf1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.017954 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" (UID: "7930a7dd-359c-4d6d-9a66-de8eaa5f6f60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.119854 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.119917 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb8dbce-6f68-4245-971e-e9087ed93cf1-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.195956 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-kube-api-access-54f72" (OuterVolumeSpecName: "kube-api-access-54f72") pod "7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" (UID: "7930a7dd-359c-4d6d-9a66-de8eaa5f6f60"). InnerVolumeSpecName "kube-api-access-54f72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.199439 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb8dbce-6f68-4245-971e-e9087ed93cf1-kube-api-access-tjfz9" (OuterVolumeSpecName: "kube-api-access-tjfz9") pod "afb8dbce-6f68-4245-971e-e9087ed93cf1" (UID: "afb8dbce-6f68-4245-971e-e9087ed93cf1"). InnerVolumeSpecName "kube-api-access-tjfz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.224489 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjfz9\" (UniqueName: \"kubernetes.io/projected/afb8dbce-6f68-4245-971e-e9087ed93cf1-kube-api-access-tjfz9\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.224569 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54f72\" (UniqueName: \"kubernetes.io/projected/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60-kube-api-access-54f72\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.308119 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2pbnm"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.428568 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c6e0b9-0c53-43ea-a471-9076b51f877b-operator-scripts\") pod \"20c6e0b9-0c53-43ea-a471-9076b51f877b\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.428846 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc4rj\" (UniqueName: \"kubernetes.io/projected/20c6e0b9-0c53-43ea-a471-9076b51f877b-kube-api-access-qc4rj\") pod \"20c6e0b9-0c53-43ea-a471-9076b51f877b\" (UID: \"20c6e0b9-0c53-43ea-a471-9076b51f877b\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.431205 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c6e0b9-0c53-43ea-a471-9076b51f877b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20c6e0b9-0c53-43ea-a471-9076b51f877b" (UID: "20c6e0b9-0c53-43ea-a471-9076b51f877b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.464346 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c6e0b9-0c53-43ea-a471-9076b51f877b-kube-api-access-qc4rj" (OuterVolumeSpecName: "kube-api-access-qc4rj") pod "20c6e0b9-0c53-43ea-a471-9076b51f877b" (UID: "20c6e0b9-0c53-43ea-a471-9076b51f877b"). InnerVolumeSpecName "kube-api-access-qc4rj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.531937 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c6e0b9-0c53-43ea-a471-9076b51f877b-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.531985 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc4rj\" (UniqueName: \"kubernetes.io/projected/20c6e0b9-0c53-43ea-a471-9076b51f877b-kube-api-access-qc4rj\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.662999 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5f48-account-create-update-9428s"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.695885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerStarted","Data":"ca79f5c53bc35cea9cdd800b13b35e7845a2f3cec859d3f9d0a9661b9229449e"}
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.712835 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2pbnm"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.712857 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2pbnm" event={"ID":"20c6e0b9-0c53-43ea-a471-9076b51f877b","Type":"ContainerDied","Data":"754ca6836515becaea394b66c815b51edc374b807ecc2806f615c091d8b85716"}
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.712911 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754ca6836515becaea394b66c815b51edc374b807ecc2806f615c091d8b85716"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.718000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b5dc60b-23c7-4e50-8944-3917a44ad224","Type":"ContainerStarted","Data":"44f8959f28d5a787430903f93056af1ad85ce7efd3d5f643a25753f2938ff99f"}
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.736886 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qdzrg"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.736911 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-f788j"
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.847564 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0439c056-347b-4a05-95aa-e85289754ecc-operator-scripts\") pod \"0439c056-347b-4a05-95aa-e85289754ecc\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.847777 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv78d\" (UniqueName: \"kubernetes.io/projected/0439c056-347b-4a05-95aa-e85289754ecc-kube-api-access-nv78d\") pod \"0439c056-347b-4a05-95aa-e85289754ecc\" (UID: \"0439c056-347b-4a05-95aa-e85289754ecc\") "
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.850992 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0439c056-347b-4a05-95aa-e85289754ecc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0439c056-347b-4a05-95aa-e85289754ecc" (UID: "0439c056-347b-4a05-95aa-e85289754ecc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.863410 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0439c056-347b-4a05-95aa-e85289754ecc-kube-api-access-nv78d" (OuterVolumeSpecName: "kube-api-access-nv78d") pod "0439c056-347b-4a05-95aa-e85289754ecc" (UID: "0439c056-347b-4a05-95aa-e85289754ecc"). InnerVolumeSpecName "kube-api-access-nv78d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.952471 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv78d\" (UniqueName: \"kubernetes.io/projected/0439c056-347b-4a05-95aa-e85289754ecc-kube-api-access-nv78d\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:24 crc kubenswrapper[4823]: I1206 06:48:24.952514 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0439c056-347b-4a05-95aa-e85289754ecc-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.176584 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6649b9bbc9-h8s24"]
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.748740 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"873b532b-a51c-43b7-87bd-6d80634122b7","Type":"ContainerStarted","Data":"7f11c9b180639c6417312c36018447daae6df612bb3c997503b542afd764fdc7"}
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.748897 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-log" containerID="cri-o://b8205e516aa5e66fcda7c3db023940317641383a0a4712958faa0784b0bfecaa" gracePeriod=30
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.748992 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-httpd" containerID="cri-o://7f11c9b180639c6417312c36018447daae6df612bb3c997503b542afd764fdc7" gracePeriod=30
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.751224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5f48-account-create-update-9428s" event={"ID":"0439c056-347b-4a05-95aa-e85289754ecc","Type":"ContainerDied","Data":"3b9280346d195efdff10a8e82159cbf5322b2a3fa131576c25838c4a7020ee69"}
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.751261 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b9280346d195efdff10a8e82159cbf5322b2a3fa131576c25838c4a7020ee69"
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.751301 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5f48-account-create-update-9428s"
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.753918 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerStarted","Data":"e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197"}
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.756215 4823 generic.go:334] "Generic (PLEG): container finished" podID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerID="ca79f5c53bc35cea9cdd800b13b35e7845a2f3cec859d3f9d0a9661b9229449e" exitCode=0
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.756737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerDied","Data":"ca79f5c53bc35cea9cdd800b13b35e7845a2f3cec859d3f9d0a9661b9229449e"}
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.758871 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6649b9bbc9-h8s24" event={"ID":"e2bf62ef-c456-4bf0-a670-a39f3b3a7079","Type":"ContainerStarted","Data":"9452af138fc6e5ba2be8321aa04e1f3e7d219aa6b22de339c9ae397b8c0a6c03"}
Dec 06 06:48:25 crc kubenswrapper[4823]: I1206 06:48:25.791524 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.791500336 podStartE2EDuration="9.791500336s" podCreationTimestamp="2025-12-06 06:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:25.777008848 +0000 UTC m=+1407.062760798" watchObservedRunningTime="2025-12-06 06:48:25.791500336 +0000 UTC m=+1407.077252296"
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.650213 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bcdffb5bf-b97n9"
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.655561 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bcdffb5bf-b97n9"
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.854221 4823 generic.go:334] "Generic (PLEG): container finished" podID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerID="552ac67794564e022b04a79839d88acc87f37a4568bb8747cf005e4a096f7726" exitCode=137
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.854711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"56b86320-1d45-4a65-8be8-b2f0d6a6395e","Type":"ContainerDied","Data":"552ac67794564e022b04a79839d88acc87f37a4568bb8747cf005e4a096f7726"}
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.900547 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b5dc60b-23c7-4e50-8944-3917a44ad224","Type":"ContainerStarted","Data":"da7ad022bf5a718e2b76df9db268753bac7236a5fc9f01928f85a0b1ef721bab"}
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.972300 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6649b9bbc9-h8s24" event={"ID":"e2bf62ef-c456-4bf0-a670-a39f3b3a7079","Type":"ContainerStarted","Data":"b3b49f1b2b097393e5d4b728889dc11d8328ba451456f10023cce1e85eddfd6e"}
Dec 06 06:48:26 crc kubenswrapper[4823]: I1206 06:48:26.972581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6649b9bbc9-h8s24" event={"ID":"e2bf62ef-c456-4bf0-a670-a39f3b3a7079","Type":"ContainerStarted","Data":"4c318008260ea08537ca59fd86645fc9bac96725b52b8848aa1b95f690050180"}
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:26.990811 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.990789145 podStartE2EDuration="9.990789145s" podCreationTimestamp="2025-12-06 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:26.961146649 +0000 UTC m=+1408.246898609" watchObservedRunningTime="2025-12-06 06:48:26.990789145 +0000 UTC m=+1408.276541105"
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.030213 4823 generic.go:334] "Generic (PLEG): container finished" podID="873b532b-a51c-43b7-87bd-6d80634122b7" containerID="7f11c9b180639c6417312c36018447daae6df612bb3c997503b542afd764fdc7" exitCode=143
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.030257 4823 generic.go:334] "Generic (PLEG): container finished" podID="873b532b-a51c-43b7-87bd-6d80634122b7" containerID="b8205e516aa5e66fcda7c3db023940317641383a0a4712958faa0784b0bfecaa" exitCode=143
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.030363 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"873b532b-a51c-43b7-87bd-6d80634122b7","Type":"ContainerDied","Data":"7f11c9b180639c6417312c36018447daae6df612bb3c997503b542afd764fdc7"}
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.030402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"873b532b-a51c-43b7-87bd-6d80634122b7","Type":"ContainerDied","Data":"b8205e516aa5e66fcda7c3db023940317641383a0a4712958faa0784b0bfecaa"}
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.060826 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32dd3184-895e-4094-8360-5ffe3627daf2","Type":"ContainerStarted","Data":"1bf645f236cfa8259705769e1c6a2fa89a4a14fa0b9e6a858221f8c19648be71"}
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.061295 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-log" containerID="cri-o://279b6e3fbe24ffee338c299383537193605b04424c02847a9769ff210aad98a6" gracePeriod=30
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.062646 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-httpd" containerID="cri-o://1bf645f236cfa8259705769e1c6a2fa89a4a14fa0b9e6a858221f8c19648be71" gracePeriod=30
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.095959 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.095925322 podStartE2EDuration="10.095925322s" podCreationTimestamp="2025-12-06 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:27.083267947 +0000 UTC m=+1408.369019927" watchObservedRunningTime="2025-12-06 06:48:27.095925322 +0000 UTC m=+1408.381677292"
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.403334 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-combined-ca-bundle\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-httpd-run\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512734 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-scripts\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512806 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-config-data\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512844 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgnvj\" (UniqueName: \"kubernetes.io/projected/873b532b-a51c-43b7-87bd-6d80634122b7-kube-api-access-qgnvj\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512882 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-logs\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.512962 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"873b532b-a51c-43b7-87bd-6d80634122b7\" (UID: \"873b532b-a51c-43b7-87bd-6d80634122b7\") "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.517340 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-logs" (OuterVolumeSpecName: "logs") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.517745 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.536510 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-scripts" (OuterVolumeSpecName: "scripts") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.545762 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873b532b-a51c-43b7-87bd-6d80634122b7-kube-api-access-qgnvj" (OuterVolumeSpecName: "kube-api-access-qgnvj") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "kube-api-access-qgnvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.560877 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.587995 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.616635 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgnvj\" (UniqueName: \"kubernetes.io/projected/873b532b-a51c-43b7-87bd-6d80634122b7-kube-api-access-qgnvj\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.616704 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-logs\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.616745 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.616758 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.616793 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/873b532b-a51c-43b7-87bd-6d80634122b7-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.616804 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.640909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-config-data" (OuterVolumeSpecName: "config-data") pod "873b532b-a51c-43b7-87bd-6d80634122b7" (UID: "873b532b-a51c-43b7-87bd-6d80634122b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.655784 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.719018 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.719070 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873b532b-a51c-43b7-87bd-6d80634122b7-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:27 crc kubenswrapper[4823]: I1206 06:48:27.943826 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.037427 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mg4v"]
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.038251 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038277 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.038303 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-httpd"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038313 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-httpd"
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.038338 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-log"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038347 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-log"
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.038386 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c6e0b9-0c53-43ea-a471-9076b51f877b" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038394 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c6e0b9-0c53-43ea-a471-9076b51f877b" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.038403 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0439c056-347b-4a05-95aa-e85289754ecc" containerName="mariadb-account-create-update"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038413 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0439c056-347b-4a05-95aa-e85289754ecc" containerName="mariadb-account-create-update"
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.038432 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb8dbce-6f68-4245-971e-e9087ed93cf1" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038441 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb8dbce-6f68-4245-971e-e9087ed93cf1" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038712 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0439c056-347b-4a05-95aa-e85289754ecc" containerName="mariadb-account-create-update"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038737 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038750 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-httpd"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038767 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" containerName="glance-log"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038782 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb8dbce-6f68-4245-971e-e9087ed93cf1" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.038795 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c6e0b9-0c53-43ea-a471-9076b51f877b" containerName="mariadb-database-create"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.039765 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.047894 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.048146 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jlwgd"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.048298 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.076797 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mg4v"]
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.098848 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b9dc84c57-c6khp"]
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.099189 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="dnsmasq-dns" containerID="cri-o://aaffe498cbaa476bc7ccbc4e66f85277575a3a946b142c976544dd6cfabf5f92" gracePeriod=10
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.099859 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"873b532b-a51c-43b7-87bd-6d80634122b7","Type":"ContainerDied","Data":"70e9f3e72753605bad3132bf8e595b16d60787c14c10a47af5f60cf8b02af169"}
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.099923 4823 scope.go:117] "RemoveContainer" containerID="7f11c9b180639c6417312c36018447daae6df612bb3c997503b542afd764fdc7"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.100022 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.125788 4823 generic.go:334] "Generic (PLEG): container finished" podID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd" exitCode=1
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.125882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerDied","Data":"0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd"}
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.126811 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd"
Dec 06 06:48:28 crc kubenswrapper[4823]: E1206 06:48:28.127357 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24)\"" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.131542 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-config-data\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.131623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v46kl\" (UniqueName: \"kubernetes.io/projected/e2d768b1-0912-4fb7-8bc8-408233b3af09-kube-api-access-v46kl\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.131698 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.131791 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-scripts\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.164050 4823 generic.go:334] "Generic (PLEG): container finished" podID="32dd3184-895e-4094-8360-5ffe3627daf2" containerID="1bf645f236cfa8259705769e1c6a2fa89a4a14fa0b9e6a858221f8c19648be71" exitCode=143
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.164100 4823 generic.go:334] "Generic (PLEG): container finished" podID="32dd3184-895e-4094-8360-5ffe3627daf2" containerID="279b6e3fbe24ffee338c299383537193605b04424c02847a9769ff210aad98a6" exitCode=143
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.164624 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32dd3184-895e-4094-8360-5ffe3627daf2","Type":"ContainerDied","Data":"1bf645f236cfa8259705769e1c6a2fa89a4a14fa0b9e6a858221f8c19648be71"}
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.164715 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32dd3184-895e-4094-8360-5ffe3627daf2","Type":"ContainerDied","Data":"279b6e3fbe24ffee338c299383537193605b04424c02847a9769ff210aad98a6"}
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.165078 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6649b9bbc9-h8s24"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.198449 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.226524 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.235771 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-config-data\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.235849 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v46kl\" (UniqueName: \"kubernetes.io/projected/e2d768b1-0912-4fb7-8bc8-408233b3af09-kube-api-access-v46kl\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.235956 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.236155 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-scripts\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.265854 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-config-data\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.273848 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.285292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v46kl\" (UniqueName: \"kubernetes.io/projected/e2d768b1-0912-4fb7-8bc8-408233b3af09-kube-api-access-v46kl\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.299901 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-scripts\") pod \"nova-cell0-conductor-db-sync-6mg4v\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.347332 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.360536 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.375077 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6649b9bbc9-h8s24" podStartSLOduration=6.375049646 podStartE2EDuration="6.375049646s" podCreationTimestamp="2025-12-06 06:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:28.233783845 +0000 UTC m=+1409.519535795" watchObservedRunningTime="2025-12-06 06:48:28.375049646 +0000 UTC m=+1409.660801606"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.381391 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.381577 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.383260 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mg4v"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.431229 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445685 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-logs\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445745 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q7pk\" (UniqueName: \"kubernetes.io/projected/cd98c1da-8857-4823-8887-4a9d6e405359-kube-api-access-2q7pk\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445821 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445872 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445956 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.445991 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550102 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-logs\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550336 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550366 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q7pk\" (UniqueName: \"kubernetes.io/projected/cd98c1da-8857-4823-8887-4a9d6e405359-kube-api-access-2q7pk\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550488 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.550544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.552466 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-logs\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.557295 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.557908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.560305 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.562090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.566548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.566588 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.576541 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q7pk\" (UniqueName: \"kubernetes.io/projected/cd98c1da-8857-4823-8887-4a9d6e405359-kube-api-access-2q7pk\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.598678 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.753021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 06:48:28 crc kubenswrapper[4823]: I1206 06:48:28.889934 4823 scope.go:117] "RemoveContainer" containerID="b8205e516aa5e66fcda7c3db023940317641383a0a4712958faa0784b0bfecaa"
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.162985 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873b532b-a51c-43b7-87bd-6d80634122b7" path="/var/lib/kubelet/pods/873b532b-a51c-43b7-87bd-6d80634122b7/volumes"
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.213284 4823 scope.go:117] "RemoveContainer" containerID="e331f4421044ffd6bb90b95a39cce22e9c826aec0947cf1211ff68f01deaa4f1"
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.214207 4823 generic.go:334] "Generic (PLEG): container finished" podID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerID="aaffe498cbaa476bc7ccbc4e66f85277575a3a946b142c976544dd6cfabf5f92" exitCode=0
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.215652 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" event={"ID":"879441cf-44a7-458b-8cfe-1ac422e1f34d","Type":"ContainerDied","Data":"aaffe498cbaa476bc7ccbc4e66f85277575a3a946b142c976544dd6cfabf5f92"}
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.316246 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.372691 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lnw8\" (UniqueName: \"kubernetes.io/projected/56b86320-1d45-4a65-8be8-b2f0d6a6395e-kube-api-access-7lnw8\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.372778 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b86320-1d45-4a65-8be8-b2f0d6a6395e-logs\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.372940 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-scripts\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.373018 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.373054 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56b86320-1d45-4a65-8be8-b2f0d6a6395e-etc-machine-id\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.373113 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data-custom\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.373263 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-combined-ca-bundle\") pod \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\" (UID: \"56b86320-1d45-4a65-8be8-b2f0d6a6395e\") "
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.373762 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56b86320-1d45-4a65-8be8-b2f0d6a6395e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.373914 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b86320-1d45-4a65-8be8-b2f0d6a6395e-logs" (OuterVolumeSpecName: "logs") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.374296 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b86320-1d45-4a65-8be8-b2f0d6a6395e-logs\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.374314 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56b86320-1d45-4a65-8be8-b2f0d6a6395e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.379922 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b86320-1d45-4a65-8be8-b2f0d6a6395e-kube-api-access-7lnw8" (OuterVolumeSpecName: "kube-api-access-7lnw8") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "kube-api-access-7lnw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.381872 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-scripts" (OuterVolumeSpecName: "scripts") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.391912 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.423600 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.476913 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.476954 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lnw8\" (UniqueName: \"kubernetes.io/projected/56b86320-1d45-4a65-8be8-b2f0d6a6395e-kube-api-access-7lnw8\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.476972 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.476984 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.480842 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data" (OuterVolumeSpecName: "config-data") pod "56b86320-1d45-4a65-8be8-b2f0d6a6395e" (UID: "56b86320-1d45-4a65-8be8-b2f0d6a6395e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.584105 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b86320-1d45-4a65-8be8-b2f0d6a6395e-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:48:29 crc kubenswrapper[4823]: I1206 06:48:29.884076 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.133830 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.206015 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.206474 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.206720 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.207434 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd"
Dec 06 06:48:30 crc kubenswrapper[4823]: E1206 06:48:30.207777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24)\"" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.231819 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.231843 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"56b86320-1d45-4a65-8be8-b2f0d6a6395e","Type":"ContainerDied","Data":"9ec87571fbe71a4dead32b81419de0af464000b2284825583a6e2cc4a21e079b"}
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.234784 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd"
Dec 06 06:48:30 crc kubenswrapper[4823]: E1206 06:48:30.235042 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24)\"" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.283686 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.304399 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.318123 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 06 06:48:30 crc kubenswrapper[4823]: E1206 06:48:30.318727 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api-log"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.318755 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api-log"
Dec 06 06:48:30 crc kubenswrapper[4823]: E1206 06:48:30.318816 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.318826 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.319075 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api-log"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.319106 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" containerName="cinder-api"
Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.320409 4823 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.323391 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.323592 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.323757 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.352458 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.400234 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.400314 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533482ee-6e34-4054-85ed-96df7676e1ab-logs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.400365 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-config-data\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.400526 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.400675 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/533482ee-6e34-4054-85ed-96df7676e1ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.400914 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.401028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4gpf\" (UniqueName: \"kubernetes.io/projected/533482ee-6e34-4054-85ed-96df7676e1ab-kube-api-access-r4gpf\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.401201 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-public-tls-certs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.401446 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-scripts\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.470018 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/533482ee-6e34-4054-85ed-96df7676e1ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509224 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509241 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4gpf\" (UniqueName: \"kubernetes.io/projected/533482ee-6e34-4054-85ed-96df7676e1ab-kube-api-access-r4gpf\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/533482ee-6e34-4054-85ed-96df7676e1ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509268 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-public-tls-certs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509361 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-scripts\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509403 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509439 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533482ee-6e34-4054-85ed-96df7676e1ab-logs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 
06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509476 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-config-data\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.509570 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.511435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533482ee-6e34-4054-85ed-96df7676e1ab-logs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.513883 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-public-tls-certs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.514434 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.520124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.522731 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-scripts\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.523406 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.528683 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533482ee-6e34-4054-85ed-96df7676e1ab-config-data\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.535268 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4gpf\" (UniqueName: \"kubernetes.io/projected/533482ee-6e34-4054-85ed-96df7676e1ab-kube-api-access-r4gpf\") pod \"cinder-api-0\" (UID: \"533482ee-6e34-4054-85ed-96df7676e1ab\") " pod="openstack/cinder-api-0" Dec 06 06:48:30 crc kubenswrapper[4823]: I1206 06:48:30.643087 4823 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.177526 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b86320-1d45-4a65-8be8-b2f0d6a6395e" path="/var/lib/kubelet/pods/56b86320-1d45-4a65-8be8-b2f0d6a6395e/volumes" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.515999 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.596528 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.621403 4823 scope.go:117] "RemoveContainer" containerID="552ac67794564e022b04a79839d88acc87f37a4568bb8747cf005e4a096f7726" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.640936 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-logs\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.640980 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxn2l\" (UniqueName: \"kubernetes.io/projected/32dd3184-895e-4094-8360-5ffe3627daf2-kube-api-access-lxn2l\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.641037 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.641175 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-combined-ca-bundle\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.641281 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-httpd-run\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.641307 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-scripts\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.641400 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-config-data\") pod \"32dd3184-895e-4094-8360-5ffe3627daf2\" (UID: \"32dd3184-895e-4094-8360-5ffe3627daf2\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.643272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-logs" (OuterVolumeSpecName: "logs") 
pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.651147 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-scripts" (OuterVolumeSpecName: "scripts") pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.656847 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.668029 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.694960 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dd3184-895e-4094-8360-5ffe3627daf2-kube-api-access-lxn2l" (OuterVolumeSpecName: "kube-api-access-lxn2l") pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "kube-api-access-lxn2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.744053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-swift-storage-0\") pod \"879441cf-44a7-458b-8cfe-1ac422e1f34d\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.744135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-config\") pod \"879441cf-44a7-458b-8cfe-1ac422e1f34d\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.744186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-svc\") pod \"879441cf-44a7-458b-8cfe-1ac422e1f34d\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.744249 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-nb\") pod \"879441cf-44a7-458b-8cfe-1ac422e1f34d\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.744375 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-sb\") pod \"879441cf-44a7-458b-8cfe-1ac422e1f34d\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.744413 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnr5x\" (UniqueName: \"kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x\") pod \"879441cf-44a7-458b-8cfe-1ac422e1f34d\" (UID: \"879441cf-44a7-458b-8cfe-1ac422e1f34d\") " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.745146 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.745165 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.745177 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32dd3184-895e-4094-8360-5ffe3627daf2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.745188 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxn2l\" (UniqueName: \"kubernetes.io/projected/32dd3184-895e-4094-8360-5ffe3627daf2-kube-api-access-lxn2l\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.745273 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.754619 4823 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-config-data" (OuterVolumeSpecName: "config-data") pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.786826 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x" (OuterVolumeSpecName: "kube-api-access-mnr5x") pod "879441cf-44a7-458b-8cfe-1ac422e1f34d" (UID: "879441cf-44a7-458b-8cfe-1ac422e1f34d"). InnerVolumeSpecName "kube-api-access-mnr5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.820745 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32dd3184-895e-4094-8360-5ffe3627daf2" (UID: "32dd3184-895e-4094-8360-5ffe3627daf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.832968 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.849376 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnr5x\" (UniqueName: \"kubernetes.io/projected/879441cf-44a7-458b-8cfe-1ac422e1f34d-kube-api-access-mnr5x\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.849410 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.849421 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.849430 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dd3184-895e-4094-8360-5ffe3627daf2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.873690 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "879441cf-44a7-458b-8cfe-1ac422e1f34d" (UID: "879441cf-44a7-458b-8cfe-1ac422e1f34d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.909021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "879441cf-44a7-458b-8cfe-1ac422e1f34d" (UID: "879441cf-44a7-458b-8cfe-1ac422e1f34d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.916239 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-config" (OuterVolumeSpecName: "config") pod "879441cf-44a7-458b-8cfe-1ac422e1f34d" (UID: "879441cf-44a7-458b-8cfe-1ac422e1f34d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.938630 4823 scope.go:117] "RemoveContainer" containerID="407264a4c60744014a2edb04d8a9de85229d670b680eeafe9a5c8d053fbe881a" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.953918 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.964910 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.964943 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.980797 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "879441cf-44a7-458b-8cfe-1ac422e1f34d" (UID: "879441cf-44a7-458b-8cfe-1ac422e1f34d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:31 crc kubenswrapper[4823]: W1206 06:48:31.985439 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d768b1_0912_4fb7_8bc8_408233b3af09.slice/crio-4fd91ac47f4a223b5469f322682097f584c22bd7f3c63078e17b813e6380b248 WatchSource:0}: Error finding container 4fd91ac47f4a223b5469f322682097f584c22bd7f3c63078e17b813e6380b248: Status 404 returned error can't find the container with id 4fd91ac47f4a223b5469f322682097f584c22bd7f3c63078e17b813e6380b248 Dec 06 06:48:31 crc kubenswrapper[4823]: I1206 06:48:31.989378 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mg4v"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.018139 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "879441cf-44a7-458b-8cfe-1ac422e1f34d" (UID: "879441cf-44a7-458b-8cfe-1ac422e1f34d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.066568 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.066609 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879441cf-44a7-458b-8cfe-1ac422e1f34d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.309833 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.328636 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" event={"ID":"e2d768b1-0912-4fb7-8bc8-408233b3af09","Type":"ContainerStarted","Data":"4fd91ac47f4a223b5469f322682097f584c22bd7f3c63078e17b813e6380b248"} Dec 06 06:48:32 crc kubenswrapper[4823]: W1206 06:48:32.332903 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533482ee_6e34_4054_85ed_96df7676e1ab.slice/crio-c59035a9350fe922e0b90ecd3e30e4c7c664bd2ebb43706ea9bdef51214fe7e0 WatchSource:0}: Error finding container c59035a9350fe922e0b90ecd3e30e4c7c664bd2ebb43706ea9bdef51214fe7e0: Status 404 returned error can't find the container with id c59035a9350fe922e0b90ecd3e30e4c7c664bd2ebb43706ea9bdef51214fe7e0 Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.353002 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32dd3184-895e-4094-8360-5ffe3627daf2","Type":"ContainerDied","Data":"73a38440cfbfbd2b5370c8a0b26c3576f136146fabadd08eec71c5b560e5de36"} Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.353067 4823 scope.go:117] "RemoveContainer" containerID="1bf645f236cfa8259705769e1c6a2fa89a4a14fa0b9e6a858221f8c19648be71" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.353135 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.379159 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bc03fdf8-c76b-4330-b7fb-58142df075c3","Type":"ContainerStarted","Data":"3315e2679f176cf5cc97104b16151a16fa424e3721a6981c95788a285c461a89"} Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.400647 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerStarted","Data":"7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599"} Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.414676 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerStarted","Data":"f3a6d7d985847bd412a6a515b3d066c56093cf5f894e1356004663d635be335f"} Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.421416 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.431962 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" event={"ID":"879441cf-44a7-458b-8cfe-1ac422e1f34d","Type":"ContainerDied","Data":"5bd92eefc17b2fd6aeda7a781ce66d23f52b1e74af42657e2f471bcc07ca2bbb"} Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.432086 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9dc84c57-c6khp" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.444326 4823 scope.go:117] "RemoveContainer" containerID="279b6e3fbe24ffee338c299383537193605b04424c02847a9769ff210aad98a6" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.448896 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.455610 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.118007299 podStartE2EDuration="39.455579466s" podCreationTimestamp="2025-12-06 06:47:53 +0000 UTC" firstStartedPulling="2025-12-06 06:47:54.980556537 +0000 UTC m=+1376.266308497" lastFinishedPulling="2025-12-06 06:48:31.318128694 +0000 UTC m=+1412.603880664" observedRunningTime="2025-12-06 06:48:32.421005207 +0000 UTC m=+1413.706757167" watchObservedRunningTime="2025-12-06 06:48:32.455579466 +0000 UTC m=+1413.741331436" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.495540 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:32 crc kubenswrapper[4823]: E1206 06:48:32.496251 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="dnsmasq-dns" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496268 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="dnsmasq-dns" Dec 06 06:48:32 crc kubenswrapper[4823]: E1206 06:48:32.496293 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="init" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496301 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="init" Dec 06 
06:48:32 crc kubenswrapper[4823]: E1206 06:48:32.496321 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-log" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496331 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-log" Dec 06 06:48:32 crc kubenswrapper[4823]: E1206 06:48:32.496362 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-httpd" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496369 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-httpd" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496608 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" containerName="dnsmasq-dns" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496629 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-httpd" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.496680 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" containerName="glance-log" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.498203 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.508561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.509023 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.520945 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.530138 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l6rbl" podStartSLOduration=7.393685071 podStartE2EDuration="17.530104809s" podCreationTimestamp="2025-12-06 06:48:15 +0000 UTC" firstStartedPulling="2025-12-06 06:48:21.204737321 +0000 UTC m=+1402.490489281" lastFinishedPulling="2025-12-06 06:48:31.341157059 +0000 UTC m=+1412.626909019" observedRunningTime="2025-12-06 06:48:32.451979242 +0000 UTC m=+1413.737731202" watchObservedRunningTime="2025-12-06 06:48:32.530104809 +0000 UTC m=+1413.815856769" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.546962 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b9dc84c57-c6khp"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.567562 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b9dc84c57-c6khp"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.589513 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.610809 4823 scope.go:117] "RemoveContainer" containerID="aaffe498cbaa476bc7ccbc4e66f85277575a3a946b142c976544dd6cfabf5f92" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.719555 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.719644 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.719904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.719987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-logs\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.720111 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-kube-api-access-zj595\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.720148 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.720251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.720330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.745466 4823 scope.go:117] "RemoveContainer" containerID="32ff300450287671c32063acc66caf696dc8f09ee67ab7d08c332c326ac60616" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.837958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838035 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838080 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838187 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-logs\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-kube-api-access-zj595\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.838293 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.839210 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.839364 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-logs\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc 
kubenswrapper[4823]: I1206 06:48:32.843504 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.850438 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.851652 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.851687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.863372 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.895256 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-kube-api-access-zj595\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:32 crc kubenswrapper[4823]: I1206 06:48:32.919001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:48:33 crc kubenswrapper[4823]: I1206 06:48:33.187330 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:33 crc kubenswrapper[4823]: I1206 06:48:33.256908 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32dd3184-895e-4094-8360-5ffe3627daf2" path="/var/lib/kubelet/pods/32dd3184-895e-4094-8360-5ffe3627daf2/volumes" Dec 06 06:48:33 crc kubenswrapper[4823]: I1206 06:48:33.257920 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879441cf-44a7-458b-8cfe-1ac422e1f34d" path="/var/lib/kubelet/pods/879441cf-44a7-458b-8cfe-1ac422e1f34d/volumes" Dec 06 06:48:33 crc kubenswrapper[4823]: I1206 06:48:33.468622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"533482ee-6e34-4054-85ed-96df7676e1ab","Type":"ContainerStarted","Data":"c59035a9350fe922e0b90ecd3e30e4c7c664bd2ebb43706ea9bdef51214fe7e0"} Dec 06 06:48:33 crc kubenswrapper[4823]: I1206 06:48:33.471292 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd98c1da-8857-4823-8887-4a9d6e405359","Type":"ContainerStarted","Data":"cf684f00ffb32dba550e81822d9f4f668531c2e9bf7311ce5b7357eeff04a6a6"} Dec 06 06:48:34 crc kubenswrapper[4823]: I1206 06:48:34.034514 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:34 crc kubenswrapper[4823]: I1206 06:48:34.235406 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:34 crc kubenswrapper[4823]: W1206 06:48:34.241237 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc924dcca_1855_4731_bc6f_f3ca3bf51e8b.slice/crio-f308caa85bef51791627bda93f4a298a80bc91ba20e930cf9d73e52c8804550e WatchSource:0}: Error finding container f308caa85bef51791627bda93f4a298a80bc91ba20e930cf9d73e52c8804550e: Status 404 returned error can't find the container with id f308caa85bef51791627bda93f4a298a80bc91ba20e930cf9d73e52c8804550e Dec 06 06:48:34 crc kubenswrapper[4823]: I1206 06:48:34.506723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"533482ee-6e34-4054-85ed-96df7676e1ab","Type":"ContainerStarted","Data":"15b9622770b107f26463043aa29922f25821771d9dffa024ec13c91b538b4dda"} Dec 06 06:48:34 crc kubenswrapper[4823]: I1206 06:48:34.526394 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd98c1da-8857-4823-8887-4a9d6e405359","Type":"ContainerStarted","Data":"db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701"} Dec 06 06:48:34 crc kubenswrapper[4823]: I1206 06:48:34.535931 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerStarted","Data":"87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f"} Dec 06 06:48:34 crc kubenswrapper[4823]: I1206 06:48:34.539722 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c924dcca-1855-4731-bc6f-f3ca3bf51e8b","Type":"ContainerStarted","Data":"f308caa85bef51791627bda93f4a298a80bc91ba20e930cf9d73e52c8804550e"} Dec 06 06:48:35 crc kubenswrapper[4823]: I1206 06:48:35.573525 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c924dcca-1855-4731-bc6f-f3ca3bf51e8b","Type":"ContainerStarted","Data":"95287a7fb812b9dab7889b0fbae606485a89d976d843278d9dd77a273a03559b"} Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.041329 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.041940 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.052106 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.052201 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.052262 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.053245 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf0da5e873b0675ce3affbf1aff07940b681c1bb20491ade8083d807561c411f"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.053306 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://cf0da5e873b0675ce3affbf1aff07940b681c1bb20491ade8083d807561c411f" gracePeriod=600 Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.596639 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"cf0da5e873b0675ce3affbf1aff07940b681c1bb20491ade8083d807561c411f"} Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.596997 4823 scope.go:117] "RemoveContainer" containerID="3a9115986422c421655f98d90d9af3c203435cfaca9c79b7b491e0d1286a3843" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.596591 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="cf0da5e873b0675ce3affbf1aff07940b681c1bb20491ade8083d807561c411f" exitCode=0 Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.604711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"533482ee-6e34-4054-85ed-96df7676e1ab","Type":"ContainerStarted","Data":"d3159e31ac30bd95b0aaada4bc35d522b79a4d0e5194a3dcfeda2326fd51fa6d"} Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.604792 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 06:48:36 
crc kubenswrapper[4823]: I1206 06:48:36.614169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd98c1da-8857-4823-8887-4a9d6e405359","Type":"ContainerStarted","Data":"e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1"} Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.620633 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-central-agent" containerID="cri-o://e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197" gracePeriod=30 Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.620881 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="proxy-httpd" containerID="cri-o://0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e" gracePeriod=30 Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.620956 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="sg-core" containerID="cri-o://87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f" gracePeriod=30 Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.621010 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-notification-agent" containerID="cri-o://7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599" gracePeriod=30 Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.620415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerStarted","Data":"0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e"} Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.621171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.645273 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.645244828 podStartE2EDuration="6.645244828s" podCreationTimestamp="2025-12-06 06:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:36.638683549 +0000 UTC m=+1417.924435529" watchObservedRunningTime="2025-12-06 06:48:36.645244828 +0000 UTC m=+1417.930996788" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.698118 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.509085186 podStartE2EDuration="17.698083255s" podCreationTimestamp="2025-12-06 06:48:19 +0000 UTC" firstStartedPulling="2025-12-06 06:48:22.192878029 +0000 UTC m=+1403.478629989" lastFinishedPulling="2025-12-06 06:48:35.381876098 +0000 UTC m=+1416.667628058" observedRunningTime="2025-12-06 06:48:36.68960222 +0000 UTC m=+1417.975354190" watchObservedRunningTime="2025-12-06 06:48:36.698083255 +0000 UTC m=+1417.983835215" Dec 06 06:48:36 crc kubenswrapper[4823]: I1206 06:48:36.719416 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.7193822 
podStartE2EDuration="8.7193822s" podCreationTimestamp="2025-12-06 06:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:36.710530285 +0000 UTC m=+1417.996282245" watchObservedRunningTime="2025-12-06 06:48:36.7193822 +0000 UTC m=+1418.005134160" Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.121185 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6rbl" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" probeResult="failure" output=< Dec 06 06:48:37 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:48:37 crc kubenswrapper[4823]: > Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.637281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c924dcca-1855-4731-bc6f-f3ca3bf51e8b","Type":"ContainerStarted","Data":"ea16d4202a33a9f813961af832cfb338246a872e432ec85b6319c6f1cd0c63c0"} Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.640173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423"} Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.647302 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerID="0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e" exitCode=0 Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.647351 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerID="87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f" exitCode=2 Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.647367 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerID="7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599" exitCode=0 Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.647388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerDied","Data":"0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e"} Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.647495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerDied","Data":"87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f"} Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.647508 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerDied","Data":"7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599"} Dec 06 06:48:37 crc kubenswrapper[4823]: I1206 06:48:37.661805 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.661777707 podStartE2EDuration="5.661777707s" podCreationTimestamp="2025-12-06 06:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:37.657798242 +0000 UTC 
m=+1418.943550202" watchObservedRunningTime="2025-12-06 06:48:37.661777707 +0000 UTC m=+1418.947529667" Dec 06 06:48:38 crc kubenswrapper[4823]: I1206 06:48:38.753304 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 06:48:38 crc kubenswrapper[4823]: I1206 06:48:38.754615 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 06:48:38 crc kubenswrapper[4823]: I1206 06:48:38.913956 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 06:48:38 crc kubenswrapper[4823]: I1206 06:48:38.914569 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 06:48:39 crc kubenswrapper[4823]: I1206 06:48:39.672720 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 06:48:39 crc kubenswrapper[4823]: I1206 06:48:39.673112 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.669608 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.711388 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerID="e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197" exitCode=0 Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.712950 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.713577 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerDied","Data":"e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197"} Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.713617 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e9a933e-7013-4daa-92f7-8809b5f09042","Type":"ContainerDied","Data":"f9652949a4c9a2de904e28dc7b948aaf15789011d650bae27e783d09808b345a"} Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.713638 4823 scope.go:117] "RemoveContainer" containerID="0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.779001 4823 scope.go:117] "RemoveContainer" containerID="87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802219 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-combined-ca-bundle\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802272 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-scripts\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802326 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-log-httpd\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802375 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-run-httpd\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802446 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-config-data\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802581 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjhdd\" (UniqueName: \"kubernetes.io/projected/9e9a933e-7013-4daa-92f7-8809b5f09042-kube-api-access-kjhdd\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.802702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-sg-core-conf-yaml\") pod \"9e9a933e-7013-4daa-92f7-8809b5f09042\" (UID: \"9e9a933e-7013-4daa-92f7-8809b5f09042\") " Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.806149 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.806789 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.814946 4823 scope.go:117] "RemoveContainer" containerID="7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.829559 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-scripts" (OuterVolumeSpecName: "scripts") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.840912 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9a933e-7013-4daa-92f7-8809b5f09042-kube-api-access-kjhdd" (OuterVolumeSpecName: "kube-api-access-kjhdd") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "kube-api-access-kjhdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.893813 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.906095 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.906140 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.906152 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e9a933e-7013-4daa-92f7-8809b5f09042-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.906189 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjhdd\" (UniqueName: \"kubernetes.io/projected/9e9a933e-7013-4daa-92f7-8809b5f09042-kube-api-access-kjhdd\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.906205 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:40 crc kubenswrapper[4823]: I1206 06:48:40.946410 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.008722 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.027859 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-config-data" (OuterVolumeSpecName: "config-data") pod "9e9a933e-7013-4daa-92f7-8809b5f09042" (UID: "9e9a933e-7013-4daa-92f7-8809b5f09042"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.111066 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a933e-7013-4daa-92f7-8809b5f09042-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.138614 4823 scope.go:117] "RemoveContainer" containerID="e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.178011 4823 scope.go:117] "RemoveContainer" containerID="0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.180515 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e\": container with ID starting with 0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e not found: ID does not exist" containerID="0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.180567 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e"} err="failed to get container status \"0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e\": rpc error: code = NotFound desc = could not find container \"0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e\": container with ID starting with 0348ea7a10e5e7ffe83ba553011055f815ad94bbf68c0b27fec1fd7f0dd0fe1e not found: ID does not exist" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.180602 4823 scope.go:117] "RemoveContainer" containerID="87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.181432 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f\": container with ID starting with 87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f not found: ID does not exist" containerID="87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.181473 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f"} err="failed to get container status \"87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f\": rpc error: code = NotFound desc = could not find container \"87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f\": container with ID starting with 87c91e9d97bdfa29418a2d2a747fedb8ca7c7dff1aeee30200af14630608743f not found: ID does not exist" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.181511 4823 scope.go:117] "RemoveContainer" containerID="7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.188813 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599\": container with ID starting with 7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599 not found: ID does not exist" 
containerID="7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.188888 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599"} err="failed to get container status \"7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599\": rpc error: code = NotFound desc = could not find container \"7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599\": container with ID starting with 7c878fd7d3a6d136f55e38be14461e15eb5a5f4cf5a1da5e347240caa7828599 not found: ID does not exist" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.188931 4823 scope.go:117] "RemoveContainer" containerID="e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.189630 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197\": container with ID starting with e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197 not found: ID does not exist" containerID="e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.189703 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197"} err="failed to get container status \"e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197\": rpc error: code = NotFound desc = could not find container \"e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197\": container with ID starting with e3d2501e9105dbe87d709924d8f27c3eb0d31375a234f0ff1763e0c93fd2c197 not found: ID does not exist" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.348048 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.372764 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.386022 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.386622 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-central-agent" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.386644 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-central-agent" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.386677 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-notification-agent" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.386688 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-notification-agent" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.386725 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="proxy-httpd" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.386734 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" 
containerName="proxy-httpd" Dec 06 06:48:41 crc kubenswrapper[4823]: E1206 06:48:41.386753 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="sg-core" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.386761 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="sg-core" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.386994 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="proxy-httpd" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.387017 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-notification-agent" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.387035 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="sg-core" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.387057 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" containerName="ceilometer-central-agent" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.389574 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.393059 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.393302 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.395910 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.518907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-config-data\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.518958 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.518984 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-run-httpd\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.519152 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-log-httpd\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.519446 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgs8q\" 
(UniqueName: \"kubernetes.io/projected/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-kube-api-access-fgs8q\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.519848 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.519918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-scripts\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgs8q\" (UniqueName: \"kubernetes.io/projected/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-kube-api-access-fgs8q\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-scripts\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621804 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-config-data\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621839 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-run-httpd\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.621920 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-log-httpd\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.622535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-log-httpd\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.622714 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-run-httpd\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.631391 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-config-data\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.631499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-scripts\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.636559 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.640904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.643978 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgs8q\" (UniqueName: \"kubernetes.io/projected/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-kube-api-access-fgs8q\") pod \"ceilometer-0\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " pod="openstack/ceilometer-0" Dec 06 06:48:41 crc kubenswrapper[4823]: I1206 06:48:41.727356 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.157393 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9a933e-7013-4daa-92f7-8809b5f09042" path="/var/lib/kubelet/pods/9e9a933e-7013-4daa-92f7-8809b5f09042/volumes" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.188627 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.188736 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.246233 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.255086 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.773500 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:43 crc kubenswrapper[4823]: I1206 06:48:43.773722 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:44 crc kubenswrapper[4823]: I1206 06:48:44.141299 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd" Dec 06 06:48:44 crc kubenswrapper[4823]: E1206 06:48:44.141592 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24)\"" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" Dec 06 06:48:44 crc kubenswrapper[4823]: I1206 06:48:44.655448 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 06:48:44 crc kubenswrapper[4823]: I1206 06:48:44.664756 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 06:48:45 crc kubenswrapper[4823]: I1206 06:48:45.156735 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 06 06:48:45 crc kubenswrapper[4823]: I1206 06:48:45.808522 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:48:45 crc kubenswrapper[4823]: I1206 06:48:45.808867 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.113380 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6rbl" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" probeResult="failure" output=< Dec 06 06:48:47 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:48:47 crc kubenswrapper[4823]: > Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.307935 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.328746 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.328868 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.476973 4823 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2bcc21a4-6b09-4804-86d5-85cc7f0267e7"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2bcc21a4-6b09-4804-86d5-85cc7f0267e7] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2bcc21a4_6b09_4804_86d5_85cc7f0267e7.slice" Dec 06 06:48:47 crc kubenswrapper[4823]: E1206 06:48:47.477042 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod2bcc21a4-6b09-4804-86d5-85cc7f0267e7] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod2bcc21a4-6b09-4804-86d5-85cc7f0267e7] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2bcc21a4_6b09_4804_86d5_85cc7f0267e7.slice" pod="openstack/horizon-cdc5bf4b4-qft5r" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.843908 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdc5bf4b4-qft5r" Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.906590 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdc5bf4b4-qft5r"] Dec 06 06:48:47 crc kubenswrapper[4823]: I1206 06:48:47.922846 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cdc5bf4b4-qft5r"] Dec 06 06:48:48 crc kubenswrapper[4823]: I1206 06:48:48.260737 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 06:48:49 crc kubenswrapper[4823]: I1206 06:48:49.156676 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bcc21a4-6b09-4804-86d5-85cc7f0267e7" path="/var/lib/kubelet/pods/2bcc21a4-6b09-4804-86d5-85cc7f0267e7/volumes" Dec 06 06:48:51 crc kubenswrapper[4823]: I1206 06:48:51.339804 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:51 crc kubenswrapper[4823]: I1206 06:48:51.340532 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-log" containerID="cri-o://db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701" gracePeriod=30 Dec 06 06:48:51 crc kubenswrapper[4823]: I1206 06:48:51.340611 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-httpd" containerID="cri-o://e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1" gracePeriod=30 Dec 06 06:48:51 crc kubenswrapper[4823]: I1206 06:48:51.903825 4823 generic.go:334] "Generic (PLEG): container finished" podID="cd98c1da-8857-4823-8887-4a9d6e405359" containerID="db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701" exitCode=143 Dec 06 06:48:51 crc kubenswrapper[4823]: I1206 06:48:51.903921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd98c1da-8857-4823-8887-4a9d6e405359","Type":"ContainerDied","Data":"db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701"} 
Dec 06 06:48:52 crc kubenswrapper[4823]: E1206 06:48:52.037770 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest" Dec 06 06:48:52 crc kubenswrapper[4823]: E1206 06:48:52.037859 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.174:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest" Dec 06 06:48:52 crc kubenswrapper[4823]: E1206 06:48:52.038028 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:38.102.83.174:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v46kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-6mg4v_openstack(e2d768b1-0912-4fb7-8bc8-408233b3af09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:48:52 crc kubenswrapper[4823]: E1206 06:48:52.039269 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" podUID="e2d768b1-0912-4fb7-8bc8-408233b3af09" Dec 06 06:48:52 crc kubenswrapper[4823]: I1206 06:48:52.221811 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:52 crc kubenswrapper[4823]: I1206 06:48:52.531785 4823 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Dec 06 06:48:52 crc kubenswrapper[4823]: I1206 06:48:52.948277 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerStarted","Data":"e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44"} Dec 06 06:48:52 crc kubenswrapper[4823]: I1206 06:48:52.948597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerStarted","Data":"788b7509bba241891657730f5116a9fd51e71e2b0a98dffe587ab12b2896f1e5"} Dec 06 06:48:52 crc kubenswrapper[4823]: E1206 06:48:52.953508 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.174:5001/podified-master-centos10/openstack-nova-conductor:watcher_latest\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" podUID="e2d768b1-0912-4fb7-8bc8-408233b3af09" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.587275 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.660494 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-httpd-run\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.660992 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-scripts\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.661032 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-public-tls-certs\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.661074 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-config-data\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.661179 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-logs\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.661260 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-combined-ca-bundle\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.661346 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.661379 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q7pk\" (UniqueName: \"kubernetes.io/projected/cd98c1da-8857-4823-8887-4a9d6e405359-kube-api-access-2q7pk\") pod \"cd98c1da-8857-4823-8887-4a9d6e405359\" (UID: \"cd98c1da-8857-4823-8887-4a9d6e405359\") " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.665156 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.665168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-logs" (OuterVolumeSpecName: "logs") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.674256 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-scripts" (OuterVolumeSpecName: "scripts") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.674362 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.674475 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd98c1da-8857-4823-8887-4a9d6e405359-kube-api-access-2q7pk" (OuterVolumeSpecName: "kube-api-access-2q7pk") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "kube-api-access-2q7pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.703956 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.742366 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-config-data" (OuterVolumeSpecName: "config-data") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.752612 4823 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd3b29187-c6a2-4b88-9215-759fe3cb8dad"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd3b29187-c6a2-4b88-9215-759fe3cb8dad] : Timed out while waiting for systemd to remove kubepods-besteffort-podd3b29187_c6a2_4b88_9215_759fe3cb8dad.slice" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.759148 4823 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod199828c4-e1bd-42a8-b35c-ba26f4c980b8"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod199828c4-e1bd-42a8-b35c-ba26f4c980b8] : Timed out while waiting for systemd to remove kubepods-besteffort-pod199828c4_e1bd_42a8_b35c_ba26f4c980b8.slice" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771733 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771803 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771817 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q7pk\" (UniqueName: \"kubernetes.io/projected/cd98c1da-8857-4823-8887-4a9d6e405359-kube-api-access-2q7pk\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771832 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771845 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771856 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.771868 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd98c1da-8857-4823-8887-4a9d6e405359-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.777691 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6649b9bbc9-h8s24" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.786113 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cd98c1da-8857-4823-8887-4a9d6e405359" (UID: "cd98c1da-8857-4823-8887-4a9d6e405359"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.867791 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc699456d-7slk7"] Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.868178 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc699456d-7slk7" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-api" containerID="cri-o://6a72b8947a46f7bf870e3d45ac3ddad3f60a11982084a374c762d6001905106a" gracePeriod=30 Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.868232 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.868365 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc699456d-7slk7" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-httpd" containerID="cri-o://53ee6989e7fbf101b0ddde565302bc65b9407ab427347fe0d1879284612869bf" gracePeriod=30 Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.874324 4823 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd98c1da-8857-4823-8887-4a9d6e405359-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.874361 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.969423 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerStarted","Data":"06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461"} Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.977911 4823 generic.go:334] "Generic (PLEG): container finished" podID="cd98c1da-8857-4823-8887-4a9d6e405359" containerID="e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1" exitCode=0 Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.977974 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd98c1da-8857-4823-8887-4a9d6e405359","Type":"ContainerDied","Data":"e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1"} Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.978015 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd98c1da-8857-4823-8887-4a9d6e405359","Type":"ContainerDied","Data":"cf684f00ffb32dba550e81822d9f4f668531c2e9bf7311ce5b7357eeff04a6a6"} Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.978040 4823 scope.go:117] "RemoveContainer" containerID="e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1" Dec 06 06:48:53 crc kubenswrapper[4823]: I1206 06:48:53.978049 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.034634 4823 scope.go:117] "RemoveContainer" containerID="db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.064522 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.105789 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.126216 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:54 crc kubenswrapper[4823]: E1206 06:48:54.126873 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-httpd" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.126899 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-httpd" Dec 06 06:48:54 crc kubenswrapper[4823]: E1206 06:48:54.126918 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-log" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.126931 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-log" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.127203 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-log" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.127221 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" containerName="glance-httpd" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.128634 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.136198 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.142819 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.143216 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185168 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-config-data\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185356 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/047c6c9f-696a-47a0-9adb-3dca69a83eea-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047c6c9f-696a-47a0-9adb-3dca69a83eea-logs\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4s2\" (UniqueName: \"kubernetes.io/projected/047c6c9f-696a-47a0-9adb-3dca69a83eea-kube-api-access-bg4s2\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185672 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-scripts\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.185693 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.206237 4823 scope.go:117] "RemoveContainer" containerID="e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1" Dec 06 06:48:54 crc kubenswrapper[4823]: E1206 06:48:54.207334 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1\": container with ID starting with e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1 not found: ID does not exist" containerID="e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.207389 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1"} err="failed to get container status \"e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1\": rpc error: code = NotFound desc = could not find container \"e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1\": container with ID starting with e0cfac497c446cb8d329704803f83ffd2340bf7970632ec7d38ec40736a3e4f1 not found: ID does not exist" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.207427 4823 scope.go:117] "RemoveContainer" containerID="db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701" Dec 06 06:48:54 crc kubenswrapper[4823]: E1206 06:48:54.210732 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701\": container with ID starting with db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701 not found: ID does not exist" containerID="db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.210808 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701"} err="failed to get container status \"db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701\": rpc error: code = NotFound desc = could not find container \"db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701\": container with ID starting with db377e274f887759ccf1f74820affc37db78ed8457db29ce3b3382aa97a95701 not found: ID does not exist" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.288773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4s2\" (UniqueName: \"kubernetes.io/projected/047c6c9f-696a-47a0-9adb-3dca69a83eea-kube-api-access-bg4s2\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.288855 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-scripts\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: 
I1206 06:48:54.288887 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.288962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.289002 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-config-data\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.289047 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/047c6c9f-696a-47a0-9adb-3dca69a83eea-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.289105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047c6c9f-696a-47a0-9adb-3dca69a83eea-logs\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.289192 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.289910 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.290825 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/047c6c9f-696a-47a0-9adb-3dca69a83eea-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.291117 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/047c6c9f-696a-47a0-9adb-3dca69a83eea-logs\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.298772 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.302006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.303299 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-scripts\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.323315 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/047c6c9f-696a-47a0-9adb-3dca69a83eea-config-data\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.328685 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4s2\" (UniqueName: \"kubernetes.io/projected/047c6c9f-696a-47a0-9adb-3dca69a83eea-kube-api-access-bg4s2\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.361383 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"047c6c9f-696a-47a0-9adb-3dca69a83eea\") " pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.535555 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.994184 4823 generic.go:334] "Generic (PLEG): container finished" podID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerID="53ee6989e7fbf101b0ddde565302bc65b9407ab427347fe0d1879284612869bf" exitCode=0 Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.994354 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc699456d-7slk7" event={"ID":"15372216-fc04-44d8-8268-7dbf3b74eeb7","Type":"ContainerDied","Data":"53ee6989e7fbf101b0ddde565302bc65b9407ab427347fe0d1879284612869bf"} Dec 06 06:48:54 crc kubenswrapper[4823]: I1206 06:48:54.998921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerStarted","Data":"8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3"} Dec 06 06:48:55 crc kubenswrapper[4823]: I1206 06:48:55.156159 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd98c1da-8857-4823-8887-4a9d6e405359" path="/var/lib/kubelet/pods/cd98c1da-8857-4823-8887-4a9d6e405359/volumes" Dec 06 06:48:55 crc kubenswrapper[4823]: W1206 06:48:55.243236 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod047c6c9f_696a_47a0_9adb_3dca69a83eea.slice/crio-7c25c8f984074a6fe56f64a687771f9ea7f958619ea947c3ab3a3f93555e4a22 WatchSource:0}: Error finding container 7c25c8f984074a6fe56f64a687771f9ea7f958619ea947c3ab3a3f93555e4a22: Status 404 returned error can't find the container with id 7c25c8f984074a6fe56f64a687771f9ea7f958619ea947c3ab3a3f93555e4a22 Dec 06 06:48:55 crc kubenswrapper[4823]: I1206 06:48:55.257209 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.014051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"047c6c9f-696a-47a0-9adb-3dca69a83eea","Type":"ContainerStarted","Data":"7c25c8f984074a6fe56f64a687771f9ea7f958619ea947c3ab3a3f93555e4a22"} Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.019446 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc699456d-7slk7" event={"ID":"15372216-fc04-44d8-8268-7dbf3b74eeb7","Type":"ContainerDied","Data":"6a72b8947a46f7bf870e3d45ac3ddad3f60a11982084a374c762d6001905106a"} Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.019377 4823 generic.go:334] "Generic (PLEG): container finished" podID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerID="6a72b8947a46f7bf870e3d45ac3ddad3f60a11982084a374c762d6001905106a" exitCode=0 Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.023365 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerStarted","Data":"b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4"} Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.023694 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-central-agent" containerID="cri-o://e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44" gracePeriod=30 Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.024220 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.024417 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="proxy-httpd" containerID="cri-o://b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4" gracePeriod=30 Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.024647 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-notification-agent" containerID="cri-o://06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461" gracePeriod=30 Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.024769 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="sg-core" containerID="cri-o://8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3" gracePeriod=30 Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.142334 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.386287 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.428132 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.311375005 podStartE2EDuration="15.42809766s" podCreationTimestamp="2025-12-06 06:48:41 +0000 UTC" firstStartedPulling="2025-12-06 06:48:52.539389341 +0000 UTC m=+1433.825141301" lastFinishedPulling="2025-12-06 06:48:55.656111996 +0000 UTC m=+1436.941863956" observedRunningTime="2025-12-06 06:48:56.052029655 +0000 UTC m=+1437.337781615" watchObservedRunningTime="2025-12-06 06:48:56.42809766 +0000 UTC m=+1437.713849620" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.552537 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-combined-ca-bundle\") pod \"15372216-fc04-44d8-8268-7dbf3b74eeb7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.553308 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48ts\" (UniqueName: \"kubernetes.io/projected/15372216-fc04-44d8-8268-7dbf3b74eeb7-kube-api-access-m48ts\") pod \"15372216-fc04-44d8-8268-7dbf3b74eeb7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.553399 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-config\") pod \"15372216-fc04-44d8-8268-7dbf3b74eeb7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.553522 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-httpd-config\") pod \"15372216-fc04-44d8-8268-7dbf3b74eeb7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.553593 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-ovndb-tls-certs\") pod \"15372216-fc04-44d8-8268-7dbf3b74eeb7\" (UID: \"15372216-fc04-44d8-8268-7dbf3b74eeb7\") " Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.562013 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15372216-fc04-44d8-8268-7dbf3b74eeb7-kube-api-access-m48ts" (OuterVolumeSpecName: "kube-api-access-m48ts") pod "15372216-fc04-44d8-8268-7dbf3b74eeb7" (UID: "15372216-fc04-44d8-8268-7dbf3b74eeb7"). InnerVolumeSpecName "kube-api-access-m48ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.602128 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "15372216-fc04-44d8-8268-7dbf3b74eeb7" (UID: "15372216-fc04-44d8-8268-7dbf3b74eeb7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.631704 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-config" (OuterVolumeSpecName: "config") pod "15372216-fc04-44d8-8268-7dbf3b74eeb7" (UID: "15372216-fc04-44d8-8268-7dbf3b74eeb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.649321 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15372216-fc04-44d8-8268-7dbf3b74eeb7" (UID: "15372216-fc04-44d8-8268-7dbf3b74eeb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.660521 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48ts\" (UniqueName: \"kubernetes.io/projected/15372216-fc04-44d8-8268-7dbf3b74eeb7-kube-api-access-m48ts\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.660573 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.660584 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.660597 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.697775 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "15372216-fc04-44d8-8268-7dbf3b74eeb7" (UID: "15372216-fc04-44d8-8268-7dbf3b74eeb7"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:48:56 crc kubenswrapper[4823]: I1206 06:48:56.765894 4823 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15372216-fc04-44d8-8268-7dbf3b74eeb7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.041457 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"047c6c9f-696a-47a0-9adb-3dca69a83eea","Type":"ContainerStarted","Data":"59a18903261dbb81e32e1b7308c5c09031d49bccac9f4951ca66f3e0d10eefff"} Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.041540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"047c6c9f-696a-47a0-9adb-3dca69a83eea","Type":"ContainerStarted","Data":"f668b2c0b6de9aca4969d6c18cbdc95c1950ff2405b20a0927d191850f97713d"} Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.045117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc699456d-7slk7" event={"ID":"15372216-fc04-44d8-8268-7dbf3b74eeb7","Type":"ContainerDied","Data":"0998216dcecd0fc8e85aaa8950b43e8ede75f83dcbb6fcdce0a67c9b2c227824"} Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.045156 4823 scope.go:117] "RemoveContainer" containerID="53ee6989e7fbf101b0ddde565302bc65b9407ab427347fe0d1879284612869bf" Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.045251 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc699456d-7slk7" Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.064917 4823 generic.go:334] "Generic (PLEG): container finished" podID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerID="8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3" exitCode=2 Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.065011 4823 generic.go:334] "Generic (PLEG): container finished" podID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerID="06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461" exitCode=0 Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.065081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerDied","Data":"8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3"} Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.065118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerDied","Data":"06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461"} Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.079795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerStarted","Data":"a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a"} Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.084053 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.084028521 podStartE2EDuration="3.084028521s" podCreationTimestamp="2025-12-06 06:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:48:57.071568871 +0000 UTC m=+1438.357320841" 
watchObservedRunningTime="2025-12-06 06:48:57.084028521 +0000 UTC m=+1438.369780481" Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.115148 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6rbl" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" probeResult="failure" output=< Dec 06 06:48:57 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:48:57 crc kubenswrapper[4823]: > Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.131826 4823 scope.go:117] "RemoveContainer" containerID="6a72b8947a46f7bf870e3d45ac3ddad3f60a11982084a374c762d6001905106a" Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.165969 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc699456d-7slk7"] Dec 06 06:48:57 crc kubenswrapper[4823]: I1206 06:48:57.166018 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dc699456d-7slk7"] Dec 06 06:48:58 crc kubenswrapper[4823]: I1206 06:48:58.017932 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:48:58 crc kubenswrapper[4823]: I1206 06:48:58.018572 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-log" containerID="cri-o://95287a7fb812b9dab7889b0fbae606485a89d976d843278d9dd77a273a03559b" gracePeriod=30 Dec 06 06:48:58 crc kubenswrapper[4823]: I1206 06:48:58.018707 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-httpd" containerID="cri-o://ea16d4202a33a9f813961af832cfb338246a872e432ec85b6319c6f1cd0c63c0" gracePeriod=30 Dec 06 06:48:59 crc kubenswrapper[4823]: I1206 06:48:59.119277 4823 generic.go:334] "Generic (PLEG): container finished" podID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerID="95287a7fb812b9dab7889b0fbae606485a89d976d843278d9dd77a273a03559b" exitCode=143 Dec 06 06:48:59 crc kubenswrapper[4823]: I1206 06:48:59.119335 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c924dcca-1855-4731-bc6f-f3ca3bf51e8b","Type":"ContainerDied","Data":"95287a7fb812b9dab7889b0fbae606485a89d976d843278d9dd77a273a03559b"} Dec 06 06:48:59 crc kubenswrapper[4823]: I1206 06:48:59.165042 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" path="/var/lib/kubelet/pods/15372216-fc04-44d8-8268-7dbf3b74eeb7/volumes" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.131651 4823 generic.go:334] "Generic (PLEG): container finished" podID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerID="ea16d4202a33a9f813961af832cfb338246a872e432ec85b6319c6f1cd0c63c0" exitCode=0 Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.131701 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c924dcca-1855-4731-bc6f-f3ca3bf51e8b","Type":"ContainerDied","Data":"ea16d4202a33a9f813961af832cfb338246a872e432ec85b6319c6f1cd0c63c0"} Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.205231 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.205324 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.238710 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.240642 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352459 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-httpd-run\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352556 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-combined-ca-bundle\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352618 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-internal-tls-certs\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352639 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-config-data\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-scripts\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-logs\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352889 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.352926 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-kube-api-access-zj595\") pod \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\" (UID: \"c924dcca-1855-4731-bc6f-f3ca3bf51e8b\") " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.354295 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.356230 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-logs" (OuterVolumeSpecName: "logs") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.362633 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.366747 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-kube-api-access-zj595" (OuterVolumeSpecName: "kube-api-access-zj595") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "kube-api-access-zj595". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.368990 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-scripts" (OuterVolumeSpecName: "scripts") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.404551 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.453264 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.455888 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-kube-api-access-zj595\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.455934 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.455948 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.455959 4823 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.455973 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.455987 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.456026 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.463713 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-config-data" (OuterVolumeSpecName: "config-data") pod "c924dcca-1855-4731-bc6f-f3ca3bf51e8b" (UID: "c924dcca-1855-4731-bc6f-f3ca3bf51e8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.483625 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.558300 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:00 crc kubenswrapper[4823]: I1206 06:49:00.558343 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c924dcca-1855-4731-bc6f-f3ca3bf51e8b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.145485 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.161480 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c924dcca-1855-4731-bc6f-f3ca3bf51e8b","Type":"ContainerDied","Data":"f308caa85bef51791627bda93f4a298a80bc91ba20e930cf9d73e52c8804550e"} Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.161545 4823 scope.go:117] "RemoveContainer" containerID="ea16d4202a33a9f813961af832cfb338246a872e432ec85b6319c6f1cd0c63c0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.198834 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.201096 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.210959 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.228600 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:49:01 crc kubenswrapper[4823]: E1206 06:49:01.229083 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-httpd" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229101 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-httpd" Dec 06 06:49:01 crc kubenswrapper[4823]: E1206 06:49:01.229109 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-log" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229115 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-log" Dec 06 06:49:01 crc kubenswrapper[4823]: E1206 06:49:01.229144 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-api" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229153 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-api" Dec 06 06:49:01 crc kubenswrapper[4823]: E1206 06:49:01.229167 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-httpd" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229173 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-httpd" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229382 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-httpd" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229400 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" containerName="glance-log" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229409 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-httpd" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.229432 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="15372216-fc04-44d8-8268-7dbf3b74eeb7" containerName="neutron-api" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.230180 4823 scope.go:117] "RemoveContainer" containerID="95287a7fb812b9dab7889b0fbae606485a89d976d843278d9dd77a273a03559b" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.233123 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.237717 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.237948 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.334013 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.352493 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400154 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400304 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-config-data\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400326 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400371 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwm44\" (UniqueName: \"kubernetes.io/projected/780274e1-f304-47a7-81ad-933887d54459-kube-api-access-xwm44\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400413 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780274e1-f304-47a7-81ad-933887d54459-logs\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" 
Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400519 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-scripts\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.400597 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/780274e1-f304-47a7-81ad-933887d54459-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwm44\" (UniqueName: \"kubernetes.io/projected/780274e1-f304-47a7-81ad-933887d54459-kube-api-access-xwm44\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502727 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780274e1-f304-47a7-81ad-933887d54459-logs\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-scripts\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502874 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/780274e1-f304-47a7-81ad-933887d54459-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502961 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.502991 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.503016 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-config-data\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.503351 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.503447 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/780274e1-f304-47a7-81ad-933887d54459-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.503458 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780274e1-f304-47a7-81ad-933887d54459-logs\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.507941 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-scripts\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.508445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.510199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-config-data\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.514434 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/780274e1-f304-47a7-81ad-933887d54459-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.527113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwm44\" (UniqueName: \"kubernetes.io/projected/780274e1-f304-47a7-81ad-933887d54459-kube-api-access-xwm44\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.544791 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"780274e1-f304-47a7-81ad-933887d54459\") " pod="openstack/glance-default-internal-api-0" Dec 06 06:49:01 crc kubenswrapper[4823]: I1206 06:49:01.583251 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:02 crc kubenswrapper[4823]: I1206 06:49:02.190808 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 06:49:03 crc kubenswrapper[4823]: I1206 06:49:03.154158 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c924dcca-1855-4731-bc6f-f3ca3bf51e8b" path="/var/lib/kubelet/pods/c924dcca-1855-4731-bc6f-f3ca3bf51e8b/volumes" Dec 06 06:49:03 crc kubenswrapper[4823]: I1206 06:49:03.169611 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" containerID="cri-o://a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a" gracePeriod=30 Dec 06 06:49:03 crc kubenswrapper[4823]: I1206 06:49:03.170017 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"780274e1-f304-47a7-81ad-933887d54459","Type":"ContainerStarted","Data":"932ed043116d5a5030e57f7cadd0cd3d11045e5e88e8cb716d25bc5268c60981"} Dec 06 06:49:04 crc kubenswrapper[4823]: I1206 06:49:04.181112 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"780274e1-f304-47a7-81ad-933887d54459","Type":"ContainerStarted","Data":"7041c4981778ec41ee8f01240b2939716d9afc275fdb76f1efdcc789226cd599"} Dec 06 06:49:04 crc kubenswrapper[4823]: I1206 06:49:04.537876 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 06:49:04 crc kubenswrapper[4823]: I1206 06:49:04.538212 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 06:49:04 crc kubenswrapper[4823]: I1206 06:49:04.571415 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 06:49:04 crc kubenswrapper[4823]: I1206 06:49:04.587220 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.383451 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.384595 4823 generic.go:334] "Generic (PLEG): container finished" podID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerID="a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a" exitCode=0 Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.384649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerDied","Data":"a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a"} Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.384691 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24","Type":"ContainerDied","Data":"d46bf047a86f22df7184a502a31f786f0a37dee87863d1b0faeec583fc90e8e8"} Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.384710 4823 scope.go:117] "RemoveContainer" containerID="a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.413787 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"780274e1-f304-47a7-81ad-933887d54459","Type":"ContainerStarted","Data":"748439ef53d1c815ac1d2f1f3bc001a74b7c8396fc3b87b61331808bfb688310"} Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.414365 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.414458 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.458836 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.458810654 podStartE2EDuration="4.458810654s" podCreationTimestamp="2025-12-06 06:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:05.457420554 +0000 UTC m=+1446.743172524" watchObservedRunningTime="2025-12-06 06:49:05.458810654 +0000 UTC m=+1446.744562614" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.545073 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.555203 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-custom-prometheus-ca\") pod \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.555357 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-combined-ca-bundle\") pod \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.555402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-config-data\") pod 
\"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.555629 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjzmv\" (UniqueName: \"kubernetes.io/projected/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-kube-api-access-tjzmv\") pod \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.555750 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-logs\") pod \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\" (UID: \"738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24\") " Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.559930 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-logs" (OuterVolumeSpecName: "logs") pod "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" (UID: "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.569098 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-kube-api-access-tjzmv" (OuterVolumeSpecName: "kube-api-access-tjzmv") pod "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" (UID: "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"). InnerVolumeSpecName "kube-api-access-tjzmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.600901 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" (UID: "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.601178 4823 scope.go:117] "RemoveContainer" containerID="a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a" Dec 06 06:49:05 crc kubenswrapper[4823]: E1206 06:49:05.602973 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a\": container with ID starting with a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a not found: ID does not exist" containerID="a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.603052 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a"} err="failed to get container status \"a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a\": rpc error: code = NotFound desc = could not find container \"a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a\": container with ID starting with a4e17928e9000d1acf4e3bfb3e017844fd6465033d19cf6468e9e864f315375a not found: ID does not exist" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.603105 4823 scope.go:117] "RemoveContainer" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd" Dec 06 06:49:05 crc kubenswrapper[4823]: E1206 06:49:05.604798 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd\": container with ID starting with 0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd not found: ID does not exist" containerID="0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.604831 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd"} err="failed to get container status \"0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd\": rpc error: code = NotFound desc = could not find container \"0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd\": container with ID starting with 0361cfd70d4afe6b8321a2257452d1edf130b9372a99ab20db5c162575da66fd not found: ID does not exist" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.613476 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" (UID: "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.638548 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-config-data" (OuterVolumeSpecName: "config-data") pod "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" (UID: "738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.658790 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.658854 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.658883 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjzmv\" (UniqueName: \"kubernetes.io/projected/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-kube-api-access-tjzmv\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.658972 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:05 crc kubenswrapper[4823]: I1206 06:49:05.658991 4823 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.105414 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.179209 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.360816 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6rbl"] Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.424598 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.430920 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" event={"ID":"e2d768b1-0912-4fb7-8bc8-408233b3af09","Type":"ContainerStarted","Data":"c895c98f6cf98e6a81debec5e4a9e88b0cdccf1126182e0c068a647d8dc21d28"} Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.473645 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" podStartSLOduration=6.7453544690000005 podStartE2EDuration="39.473617063s" podCreationTimestamp="2025-12-06 06:48:27 +0000 UTC" firstStartedPulling="2025-12-06 06:48:32.002921598 +0000 UTC m=+1413.288673558" lastFinishedPulling="2025-12-06 06:49:04.731184192 +0000 UTC m=+1446.016936152" observedRunningTime="2025-12-06 06:49:06.452967036 +0000 UTC m=+1447.738719006" watchObservedRunningTime="2025-12-06 06:49:06.473617063 +0000 UTC m=+1447.759369023" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.489447 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.500745 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.511184 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:49:06 crc kubenswrapper[4823]: E1206 06:49:06.511908 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.511932 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: E1206 06:49:06.511958 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.511967 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: E1206 06:49:06.511984 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.511991 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.512228 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.512248 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.512266 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.512279 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" 
containerName="watcher-decision-engine" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.513246 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.519487 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.541862 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.641216 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.641337 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4327f7bf-de7b-4f43-adae-01332719c72d-logs\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.641434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.641481 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9zp\" (UniqueName: \"kubernetes.io/projected/4327f7bf-de7b-4f43-adae-01332719c72d-kube-api-access-dr9zp\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.641545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.744201 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.744685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4327f7bf-de7b-4f43-adae-01332719c72d-logs\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.744881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.745067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9zp\" (UniqueName: \"kubernetes.io/projected/4327f7bf-de7b-4f43-adae-01332719c72d-kube-api-access-dr9zp\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.745212 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.745493 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4327f7bf-de7b-4f43-adae-01332719c72d-logs\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.750744 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.752248 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.754283 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f7bf-de7b-4f43-adae-01332719c72d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.782742 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9zp\" (UniqueName: \"kubernetes.io/projected/4327f7bf-de7b-4f43-adae-01332719c72d-kube-api-access-dr9zp\") pod \"watcher-decision-engine-0\" (UID: \"4327f7bf-de7b-4f43-adae-01332719c72d\") " pod="openstack/watcher-decision-engine-0" Dec 06 06:49:06 crc kubenswrapper[4823]: I1206 06:49:06.839199 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.165724 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" path="/var/lib/kubelet/pods/738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24/volumes" Dec 06 06:49:07 crc kubenswrapper[4823]: W1206 06:49:07.378912 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f7bf_de7b_4f43_adae_01332719c72d.slice/crio-5b3bc7d0c33517d45fc31a0c4842f8c0e1cbd624bda36970651fb3e78ef98ee5 WatchSource:0}: Error finding container 5b3bc7d0c33517d45fc31a0c4842f8c0e1cbd624bda36970651fb3e78ef98ee5: Status 404 returned error can't find the container with id 5b3bc7d0c33517d45fc31a0c4842f8c0e1cbd624bda36970651fb3e78ef98ee5 Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.384146 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.444898 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4327f7bf-de7b-4f43-adae-01332719c72d","Type":"ContainerStarted","Data":"5b3bc7d0c33517d45fc31a0c4842f8c0e1cbd624bda36970651fb3e78ef98ee5"} Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.448644 4823 generic.go:334] "Generic (PLEG): container finished" podID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerID="e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44" exitCode=0 Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.448970 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l6rbl" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" containerID="cri-o://f3a6d7d985847bd412a6a515b3d066c56093cf5f894e1356004663d635be335f" gracePeriod=2 Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.449078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerDied","Data":"e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44"} Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.449179 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:49:07 crc kubenswrapper[4823]: I1206 06:49:07.449195 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.028962 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.062679 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.472621 4823 generic.go:334] "Generic (PLEG): container finished" podID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerID="f3a6d7d985847bd412a6a515b3d066c56093cf5f894e1356004663d635be335f" exitCode=0 Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.472830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerDied","Data":"f3a6d7d985847bd412a6a515b3d066c56093cf5f894e1356004663d635be335f"} Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.476744 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4327f7bf-de7b-4f43-adae-01332719c72d","Type":"ContainerStarted","Data":"3fabce9ab52cf5700192b245633be698dc3cfee8414893078a6def29b0c8a109"} Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.589753 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.688410 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-catalog-content\") pod \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.688656 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-utilities\") pod \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.688887 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffmv5\" (UniqueName: \"kubernetes.io/projected/06a9a9aa-962e-4cf1-afe9-b831a56f3837-kube-api-access-ffmv5\") pod \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\" (UID: \"06a9a9aa-962e-4cf1-afe9-b831a56f3837\") " Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.713168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-utilities" (OuterVolumeSpecName: "utilities") pod "06a9a9aa-962e-4cf1-afe9-b831a56f3837" (UID: "06a9a9aa-962e-4cf1-afe9-b831a56f3837"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.737901 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a9a9aa-962e-4cf1-afe9-b831a56f3837-kube-api-access-ffmv5" (OuterVolumeSpecName: "kube-api-access-ffmv5") pod "06a9a9aa-962e-4cf1-afe9-b831a56f3837" (UID: "06a9a9aa-962e-4cf1-afe9-b831a56f3837"). InnerVolumeSpecName "kube-api-access-ffmv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.791670 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.791711 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffmv5\" (UniqueName: \"kubernetes.io/projected/06a9a9aa-962e-4cf1-afe9-b831a56f3837-kube-api-access-ffmv5\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.839553 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06a9a9aa-962e-4cf1-afe9-b831a56f3837" (UID: "06a9a9aa-962e-4cf1-afe9-b831a56f3837"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:08 crc kubenswrapper[4823]: I1206 06:49:08.894375 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06a9a9aa-962e-4cf1-afe9-b831a56f3837-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.492355 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6rbl" event={"ID":"06a9a9aa-962e-4cf1-afe9-b831a56f3837","Type":"ContainerDied","Data":"e294750f8d5d730431ba90a69f25e9fb8438a82381a65ac2c19e664a7d4c8e87"} Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.492458 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6rbl" Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.492462 4823 scope.go:117] "RemoveContainer" containerID="f3a6d7d985847bd412a6a515b3d066c56093cf5f894e1356004663d635be335f" Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.512860 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.512833918 podStartE2EDuration="3.512833918s" podCreationTimestamp="2025-12-06 06:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:09.512508338 +0000 UTC m=+1450.798260328" watchObservedRunningTime="2025-12-06 06:49:09.512833918 +0000 UTC m=+1450.798585878" Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.536430 4823 scope.go:117] "RemoveContainer" containerID="ca79f5c53bc35cea9cdd800b13b35e7845a2f3cec859d3f9d0a9661b9229449e" Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.540328 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6rbl"] Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.551367 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l6rbl"] Dec 06 06:49:09 crc kubenswrapper[4823]: I1206 06:49:09.562688 4823 scope.go:117] "RemoveContainer" containerID="8cf09828a572b82e600434c6287dcccf99a2ff2fed74e2cf6ca73faa82960110" Dec 06 06:49:11 crc kubenswrapper[4823]: I1206 06:49:11.152571 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" path="/var/lib/kubelet/pods/06a9a9aa-962e-4cf1-afe9-b831a56f3837/volumes" Dec 06 06:49:11 crc kubenswrapper[4823]: I1206 06:49:11.584361 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:11 crc kubenswrapper[4823]: I1206 06:49:11.584443 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:11 crc kubenswrapper[4823]: I1206 06:49:11.621151 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:11 crc kubenswrapper[4823]: I1206 06:49:11.637760 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:11 crc kubenswrapper[4823]: I1206 06:49:11.734488 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with 
statuscode: 503" Dec 06 06:49:12 crc kubenswrapper[4823]: I1206 06:49:12.528171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:12 crc kubenswrapper[4823]: I1206 06:49:12.528221 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:14 crc kubenswrapper[4823]: I1206 06:49:14.796617 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:14 crc kubenswrapper[4823]: I1206 06:49:14.797182 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 06:49:14 crc kubenswrapper[4823]: I1206 06:49:14.811002 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 06:49:16 crc kubenswrapper[4823]: I1206 06:49:16.840307 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:16 crc kubenswrapper[4823]: I1206 06:49:16.881627 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:17 crc kubenswrapper[4823]: I1206 06:49:17.603306 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:17 crc kubenswrapper[4823]: I1206 06:49:17.644398 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 06:49:22 crc kubenswrapper[4823]: I1206 06:49:22.654119 4823 generic.go:334] "Generic (PLEG): container finished" podID="e2d768b1-0912-4fb7-8bc8-408233b3af09" containerID="c895c98f6cf98e6a81debec5e4a9e88b0cdccf1126182e0c068a647d8dc21d28" exitCode=0 Dec 06 06:49:22 crc kubenswrapper[4823]: I1206 06:49:22.654247 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" event={"ID":"e2d768b1-0912-4fb7-8bc8-408233b3af09","Type":"ContainerDied","Data":"c895c98f6cf98e6a81debec5e4a9e88b0cdccf1126182e0c068a647d8dc21d28"} Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.171548 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.371549 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-scripts\") pod \"e2d768b1-0912-4fb7-8bc8-408233b3af09\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.371679 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-combined-ca-bundle\") pod \"e2d768b1-0912-4fb7-8bc8-408233b3af09\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.371723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v46kl\" (UniqueName: \"kubernetes.io/projected/e2d768b1-0912-4fb7-8bc8-408233b3af09-kube-api-access-v46kl\") pod \"e2d768b1-0912-4fb7-8bc8-408233b3af09\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.371964 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-config-data\") pod \"e2d768b1-0912-4fb7-8bc8-408233b3af09\" (UID: \"e2d768b1-0912-4fb7-8bc8-408233b3af09\") " Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.383953 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d768b1-0912-4fb7-8bc8-408233b3af09-kube-api-access-v46kl" (OuterVolumeSpecName: "kube-api-access-v46kl") pod "e2d768b1-0912-4fb7-8bc8-408233b3af09" (UID: "e2d768b1-0912-4fb7-8bc8-408233b3af09"). InnerVolumeSpecName "kube-api-access-v46kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.388889 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-scripts" (OuterVolumeSpecName: "scripts") pod "e2d768b1-0912-4fb7-8bc8-408233b3af09" (UID: "e2d768b1-0912-4fb7-8bc8-408233b3af09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.442921 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-config-data" (OuterVolumeSpecName: "config-data") pod "e2d768b1-0912-4fb7-8bc8-408233b3af09" (UID: "e2d768b1-0912-4fb7-8bc8-408233b3af09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.456420 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d768b1-0912-4fb7-8bc8-408233b3af09" (UID: "e2d768b1-0912-4fb7-8bc8-408233b3af09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.474569 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.474611 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.474623 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v46kl\" (UniqueName: \"kubernetes.io/projected/e2d768b1-0912-4fb7-8bc8-408233b3af09-kube-api-access-v46kl\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.474634 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d768b1-0912-4fb7-8bc8-408233b3af09-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.676423 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" event={"ID":"e2d768b1-0912-4fb7-8bc8-408233b3af09","Type":"ContainerDied","Data":"4fd91ac47f4a223b5469f322682097f584c22bd7f3c63078e17b813e6380b248"} Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.676493 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd91ac47f4a223b5469f322682097f584c22bd7f3c63078e17b813e6380b248" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.676513 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mg4v" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.897016 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 06:49:24 crc kubenswrapper[4823]: E1206 06:49:24.897632 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="extract-content" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.897654 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="extract-content" Dec 06 06:49:24 crc kubenswrapper[4823]: E1206 06:49:24.897687 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.897694 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" Dec 06 06:49:24 crc kubenswrapper[4823]: E1206 06:49:24.897719 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d768b1-0912-4fb7-8bc8-408233b3af09" containerName="nova-cell0-conductor-db-sync" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.897725 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d768b1-0912-4fb7-8bc8-408233b3af09" containerName="nova-cell0-conductor-db-sync" Dec 06 06:49:24 crc kubenswrapper[4823]: E1206 06:49:24.897746 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="extract-utilities" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.897753 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="extract-utilities" Dec 06 06:49:24 crc kubenswrapper[4823]: E1206 06:49:24.897839 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.897847 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="738b7ac5-e5f6-4b8a-8966-9c9e6e5e8c24" containerName="watcher-decision-engine" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.898065 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a9a9aa-962e-4cf1-afe9-b831a56f3837" containerName="registry-server" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.898082 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d768b1-0912-4fb7-8bc8-408233b3af09" containerName="nova-cell0-conductor-db-sync" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.898959 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.906162 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.912437 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jlwgd" Dec 06 06:49:24 crc kubenswrapper[4823]: I1206 06:49:24.921649 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.088205 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a74136e-ea73-45df-b31c-494fa24fecf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.088311 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzcj\" (UniqueName: \"kubernetes.io/projected/1a74136e-ea73-45df-b31c-494fa24fecf8-kube-api-access-tpzcj\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.088473 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a74136e-ea73-45df-b31c-494fa24fecf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.191186 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a74136e-ea73-45df-b31c-494fa24fecf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.191350 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpzcj\" (UniqueName: \"kubernetes.io/projected/1a74136e-ea73-45df-b31c-494fa24fecf8-kube-api-access-tpzcj\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" 
Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.192217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a74136e-ea73-45df-b31c-494fa24fecf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.198871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a74136e-ea73-45df-b31c-494fa24fecf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.204463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a74136e-ea73-45df-b31c-494fa24fecf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.213709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpzcj\" (UniqueName: \"kubernetes.io/projected/1a74136e-ea73-45df-b31c-494fa24fecf8-kube-api-access-tpzcj\") pod \"nova-cell0-conductor-0\" (UID: \"1a74136e-ea73-45df-b31c-494fa24fecf8\") " pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.226604 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:25 crc kubenswrapper[4823]: I1206 06:49:25.734791 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 06:49:25 crc kubenswrapper[4823]: W1206 06:49:25.737806 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a74136e_ea73_45df_b31c_494fa24fecf8.slice/crio-e307a7bcd883e345e1dbb80602a0280589d726c275b79b672c286c74e76ee350 WatchSource:0}: Error finding container e307a7bcd883e345e1dbb80602a0280589d726c275b79b672c286c74e76ee350: Status 404 returned error can't find the container with id e307a7bcd883e345e1dbb80602a0280589d726c275b79b672c286c74e76ee350 Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.466061 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.526753 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-config-data\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.526829 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgs8q\" (UniqueName: \"kubernetes.io/projected/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-kube-api-access-fgs8q\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.526893 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-scripts\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.527843 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-sg-core-conf-yaml\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.527962 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-log-httpd\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.528532 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.528542 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-run-httpd\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.528963 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.529027 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-combined-ca-bundle\") pod \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\" (UID: \"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b\") " Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.532281 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.532339 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.535354 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-scripts" (OuterVolumeSpecName: "scripts") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.536642 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-kube-api-access-fgs8q" (OuterVolumeSpecName: "kube-api-access-fgs8q") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "kube-api-access-fgs8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.563559 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.611937 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.634726 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.634770 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.634783 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgs8q\" (UniqueName: \"kubernetes.io/projected/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-kube-api-access-fgs8q\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.634800 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.638867 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-config-data" (OuterVolumeSpecName: "config-data") pod "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" (UID: "a51a5f16-7cc8-4f50-9bf1-2af84eb4783b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.701230 4823 generic.go:334] "Generic (PLEG): container finished" podID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerID="b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4" exitCode=137 Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.701305 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerDied","Data":"b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4"} Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.701339 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51a5f16-7cc8-4f50-9bf1-2af84eb4783b","Type":"ContainerDied","Data":"788b7509bba241891657730f5116a9fd51e71e2b0a98dffe587ab12b2896f1e5"} Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.701377 4823 scope.go:117] "RemoveContainer" containerID="b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.701534 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.708726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1a74136e-ea73-45df-b31c-494fa24fecf8","Type":"ContainerStarted","Data":"92ee3777efd71b992740eb0dfecc96cb045d8e24d9dd42e8efc42e56e4638001"} Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.708914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1a74136e-ea73-45df-b31c-494fa24fecf8","Type":"ContainerStarted","Data":"e307a7bcd883e345e1dbb80602a0280589d726c275b79b672c286c74e76ee350"} Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.709163 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.733115 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7330922429999998 podStartE2EDuration="2.733092243s" podCreationTimestamp="2025-12-06 06:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:26.72675399 +0000 UTC m=+1468.012505960" watchObservedRunningTime="2025-12-06 06:49:26.733092243 +0000 UTC m=+1468.018844203" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.736485 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.755538 4823 scope.go:117] "RemoveContainer" containerID="8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.761177 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.776162 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.797509 4823 scope.go:117] "RemoveContainer" containerID="06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.797734 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.798196 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-notification-agent" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798232 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-notification-agent" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.798305 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="proxy-httpd" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798321 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="proxy-httpd" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.798336 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-central-agent" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 
06:49:26.798350 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-central-agent" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.798386 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="sg-core" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798399 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="sg-core" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798795 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-central-agent" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798834 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="sg-core" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798857 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="proxy-httpd" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.798889 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" containerName="ceilometer-notification-agent" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.801351 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.805711 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.806006 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.823635 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.840228 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.840285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-scripts\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.840326 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s77x\" (UniqueName: \"kubernetes.io/projected/8a4cf458-c626-43f1-ac23-1054c38e7645-kube-api-access-6s77x\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.840382 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-config-data\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 
06:49:26.840470 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-run-httpd\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.840495 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-log-httpd\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.840584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.853074 4823 scope.go:117] "RemoveContainer" containerID="e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.878513 4823 scope.go:117] "RemoveContainer" containerID="b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.879127 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4\": container with ID starting with b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4 not found: ID does not exist" containerID="b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.879184 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4"} err="failed to get container status \"b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4\": rpc error: code = NotFound desc = could not find container \"b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4\": container with ID starting with b6cf2d8b8d0ce430133d626cc615fbce83ce3cd134f318d1fdb66d54f22531d4 not found: ID does not exist" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.879220 4823 scope.go:117] "RemoveContainer" containerID="8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.879852 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3\": container with ID starting with 8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3 not found: ID does not exist" containerID="8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.879895 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3"} err="failed to get container status \"8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3\": rpc error: code = NotFound desc = could not find container 
\"8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3\": container with ID starting with 8d8a29a40945af76f425c575651edf8841f00886c20d76a18950c7014345d1a3 not found: ID does not exist" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.879927 4823 scope.go:117] "RemoveContainer" containerID="06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.880365 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461\": container with ID starting with 06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461 not found: ID does not exist" containerID="06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.880444 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461"} err="failed to get container status \"06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461\": rpc error: code = NotFound desc = could not find container \"06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461\": container with ID starting with 06b7d73e5e4119a6c364dd354a79ccdb3500a1f5844045316fd41c5dbdfbc461 not found: ID does not exist" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.880499 4823 scope.go:117] "RemoveContainer" containerID="e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44" Dec 06 06:49:26 crc kubenswrapper[4823]: E1206 06:49:26.881090 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44\": container with ID starting with e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44 not found: ID does not exist" containerID="e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.881128 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44"} err="failed to get container status \"e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44\": rpc error: code = NotFound desc = could not find container \"e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44\": container with ID starting with e830bbfb7aa76dda11746510ecb1364dd097796b79d2e436454caa7075bbbb44 not found: ID does not exist" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.942985 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.943054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.943084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-scripts\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.943118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s77x\" (UniqueName: \"kubernetes.io/projected/8a4cf458-c626-43f1-ac23-1054c38e7645-kube-api-access-6s77x\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.943181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-config-data\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.943551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-run-httpd\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.943580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-log-httpd\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.944182 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-log-httpd\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.948050 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-config-data\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.948286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-run-httpd\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.948423 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-scripts\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.950829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.951551 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:26 crc kubenswrapper[4823]: I1206 06:49:26.962115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s77x\" (UniqueName: \"kubernetes.io/projected/8a4cf458-c626-43f1-ac23-1054c38e7645-kube-api-access-6s77x\") pod \"ceilometer-0\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") " pod="openstack/ceilometer-0" Dec 06 06:49:27 crc kubenswrapper[4823]: I1206 06:49:27.138762 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:49:27 crc kubenswrapper[4823]: I1206 06:49:27.162816 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51a5f16-7cc8-4f50-9bf1-2af84eb4783b" path="/var/lib/kubelet/pods/a51a5f16-7cc8-4f50-9bf1-2af84eb4783b/volumes" Dec 06 06:49:27 crc kubenswrapper[4823]: I1206 06:49:27.634132 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:49:27 crc kubenswrapper[4823]: I1206 06:49:27.753198 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerStarted","Data":"11cbd661d442c055bb4bc49975f706cbbf66ea421eb5381e0bf48ed75ba44144"} Dec 06 06:49:28 crc kubenswrapper[4823]: I1206 06:49:28.779426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerStarted","Data":"8af0c28e888fa4d5793ae5d4e506f85d3334a9e47e2f248128efd535ab7bace7"} Dec 06 06:49:28 crc kubenswrapper[4823]: I1206 06:49:28.780065 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerStarted","Data":"78f023ea41c0203f1edc73c59c63a12b5ad719ce0c49b7f8dfcadb9f7d82bc23"} Dec 06 06:49:29 crc kubenswrapper[4823]: I1206 06:49:29.809644 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerStarted","Data":"e931905d8f8211e215f09f286239ac6baf8f48ca0741160a528ae6a669d737e3"} Dec 06 06:49:30 crc kubenswrapper[4823]: I1206 06:49:30.277265 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 06 06:49:30 crc kubenswrapper[4823]: I1206 06:49:30.831171 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerStarted","Data":"9ef8800e027d1803b9587c5b9e9e16dd2ca3ebdb5568c45d946d1b887723892a"} Dec 06 06:49:30 crc kubenswrapper[4823]: I1206 06:49:30.832988 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.004441 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.510567016 podStartE2EDuration="5.004415804s" podCreationTimestamp="2025-12-06 06:49:26 +0000 UTC" firstStartedPulling="2025-12-06 06:49:27.645074201 +0000 UTC m=+1468.930826161" lastFinishedPulling="2025-12-06 06:49:30.138922989 +0000 UTC m=+1471.424674949" observedRunningTime="2025-12-06 06:49:30.868636252 +0000 UTC m=+1472.154388202" watchObservedRunningTime="2025-12-06 06:49:31.004415804 +0000 UTC m=+1472.290167764" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.005574 4823 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8mx8j"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.007387 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.010390 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.012827 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.018876 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mx8j"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.137561 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghcj\" (UniqueName: \"kubernetes.io/projected/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-kube-api-access-zghcj\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.138077 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-scripts\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.138119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.138158 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-config-data\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.240468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-scripts\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.240529 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.240548 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-config-data\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " 
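
The volume lines for nova-cell0-cell-mapping-8mx8j above, like the ceilometer-0 lines earlier, show the reconciler's per-volume sequence: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245), then operationExecutor.MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637); teardown runs the mirror image, UnmountVolume.TearDown followed by "Volume detached" (reconciler_common.go:293), as at the start of this excerpt. A small offline helper, hypothetical and not kubelet code, that prints those phases in order for one pod UID from a saved journal (run as: go run trace.go 36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c < journal.txt):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Traces the volume phases kubelet logs for one pod UID.
    // The phase strings are copied verbatim from the entries above.
    func main() {
        uid := os.Args[1]
        phases := []string{
            "operationExecutor.VerifyControllerAttachedVolume started",
            "operationExecutor.MountVolume started",
            "MountVolume.SetUp succeeded",
        }
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, uid) {
                continue
            }
            for _, p := range phases {
                if strings.Contains(line, p) {
                    fmt.Println(p)
                }
            }
        }
    }
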
pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.240624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghcj\" (UniqueName: \"kubernetes.io/projected/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-kube-api-access-zghcj\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.250192 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.250345 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-config-data\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.252832 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-scripts\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.274555 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghcj\" (UniqueName: \"kubernetes.io/projected/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-kube-api-access-zghcj\") pod \"nova-cell0-cell-mapping-8mx8j\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.318646 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.320893 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.324074 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.331206 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.344679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.344758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/973019ce-1e95-4284-b2da-bd24519c84d0-logs\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.344820 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-config-data\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.344841 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cskz8\" (UniqueName: \"kubernetes.io/projected/973019ce-1e95-4284-b2da-bd24519c84d0-kube-api-access-cskz8\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.348385 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.443639 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.448204 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.450629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.450767 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/973019ce-1e95-4284-b2da-bd24519c84d0-logs\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.450851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-config-data\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.450883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cskz8\" (UniqueName: \"kubernetes.io/projected/973019ce-1e95-4284-b2da-bd24519c84d0-kube-api-access-cskz8\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.456998 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.457683 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/973019ce-1e95-4284-b2da-bd24519c84d0-logs\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.465488 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.472269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-config-data\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.496770 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cskz8\" (UniqueName: \"kubernetes.io/projected/973019ce-1e95-4284-b2da-bd24519c84d0-kube-api-access-cskz8\") pod \"nova-api-0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.508740 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.549288 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.551371 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.555002 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558206 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-logs\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpkh\" (UniqueName: \"kubernetes.io/projected/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-kube-api-access-tmpkh\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558294 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558366 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-config-data\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558427 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-config-data\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558485 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl86p\" (UniqueName: \"kubernetes.io/projected/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-kube-api-access-cl86p\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.558507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.579428 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.630030 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661704 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl86p\" (UniqueName: \"kubernetes.io/projected/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-kube-api-access-cl86p\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661753 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661844 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-logs\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661876 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpkh\" (UniqueName: \"kubernetes.io/projected/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-kube-api-access-tmpkh\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-config-data\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.661983 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-config-data\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.663393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-logs\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.677066 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.689105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.691109 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-config-data\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.700246 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl86p\" (UniqueName: \"kubernetes.io/projected/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-kube-api-access-cl86p\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.711390 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-config-data\") pod \"nova-metadata-0\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " pod="openstack/nova-metadata-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.724007 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpkh\" (UniqueName: \"kubernetes.io/projected/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-kube-api-access-tmpkh\") pod \"nova-scheduler-0\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.740435 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d46bc7bf9-mxq8f"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.748970 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.769030 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d46bc7bf9-mxq8f"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.813834 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.815989 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.818715 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.828731 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.913900 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-sb\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.914698 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-nb\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.914851 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-swift-storage-0\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.914923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmkd\" (UniqueName: \"kubernetes.io/projected/d601167c-c425-40f5-ad1d-9cf563033888-kube-api-access-dfmkd\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.914987 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-config\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.915042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-svc\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:31 crc kubenswrapper[4823]: I1206 06:49:31.966461 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:31.999931 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.019623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-nb\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020028 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-swift-storage-0\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmkd\" (UniqueName: \"kubernetes.io/projected/d601167c-c425-40f5-ad1d-9cf563033888-kube-api-access-dfmkd\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020191 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-config\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020236 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-svc\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020271 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020477 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f559q\" (UniqueName: \"kubernetes.io/projected/228fbc4e-4973-4671-a38c-b74baed39e11-kube-api-access-f559q\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.020532 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-sb\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " 
pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.021750 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-sb\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.022447 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-nb\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.024565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-swift-storage-0\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.026589 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-config\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.050169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-svc\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.054124 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmkd\" (UniqueName: \"kubernetes.io/projected/d601167c-c425-40f5-ad1d-9cf563033888-kube-api-access-dfmkd\") pod \"dnsmasq-dns-d46bc7bf9-mxq8f\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.097246 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.125993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f559q\" (UniqueName: \"kubernetes.io/projected/228fbc4e-4973-4671-a38c-b74baed39e11-kube-api-access-f559q\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.126118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.126265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.132280 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.138486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.157018 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f559q\" (UniqueName: \"kubernetes.io/projected/228fbc4e-4973-4671-a38c-b74baed39e11-kube-api-access-f559q\") pod \"nova-cell1-novncproxy-0\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.187611 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mx8j"] Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.207521 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.444146 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.463012 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kgd2q"] Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.467486 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.471056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.472453 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.475177 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kgd2q"] Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.641428 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.641576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-config-data\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.641678 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-scripts\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.641737 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mgt\" (UniqueName: \"kubernetes.io/projected/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-kube-api-access-49mgt\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.707291 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.744236 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.744303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-config-data\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.744358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-scripts\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " 
pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.744390 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49mgt\" (UniqueName: \"kubernetes.io/projected/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-kube-api-access-49mgt\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.750395 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.750514 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-config-data\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.751649 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-scripts\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.763436 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49mgt\" (UniqueName: \"kubernetes.io/projected/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-kube-api-access-49mgt\") pod \"nova-cell1-conductor-db-sync-kgd2q\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.846676 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.939290 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mx8j" event={"ID":"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c","Type":"ContainerStarted","Data":"6478bc6ae5f1e5de56ed03bdf39ebf427c09512faceaf0c5a6d2d444c4c1c6ef"} Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.941801 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"973019ce-1e95-4284-b2da-bd24519c84d0","Type":"ContainerStarted","Data":"5c190cb2eb36084da2c8d92d713c40bc5085eebb6966af431272c83748ff6a51"} Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.944674 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afc1dd4e-6ce9-45f3-8beb-37138f38a4df","Type":"ContainerStarted","Data":"069097556d30b4db6007dfc615ba13c257962c328c191eeef41900b2c132b762"} Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.956757 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 06:49:32 crc kubenswrapper[4823]: I1206 06:49:32.977595 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.241685 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d46bc7bf9-mxq8f"] Dec 06 06:49:33 crc kubenswrapper[4823]: W1206 06:49:33.270019 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd601167c_c425_40f5_ad1d_9cf563033888.slice/crio-d25d253c06a66c08ce32dcdc4b6874a9b5d1e66c76d1acfd13f6c798292dd1ed WatchSource:0}: Error finding container d25d253c06a66c08ce32dcdc4b6874a9b5d1e66c76d1acfd13f6c798292dd1ed: Status 404 returned error can't find the container with id d25d253c06a66c08ce32dcdc4b6874a9b5d1e66c76d1acfd13f6c798292dd1ed Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.453150 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kgd2q"] Dec 06 06:49:33 crc kubenswrapper[4823]: W1206 06:49:33.461844 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ac01ab_ea2b_4b2c_9e2e_dab4612351d5.slice/crio-1b0b6bb492a8aa84a2d7fadbcb9bb6f231dc07c1e2601552cf420188e3399931 WatchSource:0}: Error finding container 1b0b6bb492a8aa84a2d7fadbcb9bb6f231dc07c1e2601552cf420188e3399931: Status 404 returned error can't find the container with id 1b0b6bb492a8aa84a2d7fadbcb9bb6f231dc07c1e2601552cf420188e3399931 Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.960783 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" event={"ID":"d601167c-c425-40f5-ad1d-9cf563033888","Type":"ContainerStarted","Data":"d25d253c06a66c08ce32dcdc4b6874a9b5d1e66c76d1acfd13f6c798292dd1ed"} Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.962312 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228fbc4e-4973-4671-a38c-b74baed39e11","Type":"ContainerStarted","Data":"ab66030316de03f14baf7132caa043884a11307aa905397b94b5edde66efeb9b"} Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.965365 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
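
The two W-level manager.go:1169 entries above come from cAdvisor's cgroup watcher: it notices the new crio-<id> cgroup of a starting container (the IDs match the dnsmasq-dns and nova-cell1-conductor-db-sync container data nearby) before the runtime can answer for that ID, so the status lookup returns 404. Here the warnings are transient, both containers come up normally in the entries that follow. A throwaway Go tally, an assumed offline workflow rather than anything in kubelet, that groups klog records by severity and source file so W/E spikes like these stand out (run as: go run tally.go < journal.txt):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Tallies klog records by severity letter and source file,
    // e.g. "W manager.go 2" against the I-level background noise.
    func main() {
        // klog prefix: Lmmdd hh:mm:ss.uuuuuu threadid file:line]
        re := regexp.MustCompile(`([IWEF])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+([\w./-]+\.go):\d+\]`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]+" "+m[2]]++
            }
        }
        for k, n := range counts {
            fmt.Println(k, n)
        }
    }
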
event={"ID":"e0c356ea-be6e-48b7-99b8-cdb4fdb13466","Type":"ContainerStarted","Data":"427844862c206939526d919870a1ca90ddff6e108ca8930d12b6590b75b0fc3b"} Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.969800 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mx8j" event={"ID":"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c","Type":"ContainerStarted","Data":"71d0841bd99163cec97b53ba0cf403bc9f24472eb24888d603b77b1ebc93c420"} Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.977195 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" event={"ID":"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5","Type":"ContainerStarted","Data":"5ebef798af4f48fab5049f43add77c2defb01ff161364f7cc081d96a1d477f59"} Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.977261 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" event={"ID":"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5","Type":"ContainerStarted","Data":"1b0b6bb492a8aa84a2d7fadbcb9bb6f231dc07c1e2601552cf420188e3399931"} Dec 06 06:49:33 crc kubenswrapper[4823]: I1206 06:49:33.998619 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8mx8j" podStartSLOduration=3.998595688 podStartE2EDuration="3.998595688s" podCreationTimestamp="2025-12-06 06:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:33.99522423 +0000 UTC m=+1475.280976200" watchObservedRunningTime="2025-12-06 06:49:33.998595688 +0000 UTC m=+1475.284347648" Dec 06 06:49:34 crc kubenswrapper[4823]: I1206 06:49:34.992247 4823 generic.go:334] "Generic (PLEG): container finished" podID="d601167c-c425-40f5-ad1d-9cf563033888" containerID="2ad2eda1dcaa3475ce2af7abe256e8608e7c854d8151a7bdc22c8064ffb15ea2" exitCode=0 Dec 06 06:49:34 crc kubenswrapper[4823]: I1206 06:49:34.992363 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" event={"ID":"d601167c-c425-40f5-ad1d-9cf563033888","Type":"ContainerDied","Data":"2ad2eda1dcaa3475ce2af7abe256e8608e7c854d8151a7bdc22c8064ffb15ea2"} Dec 06 06:49:35 crc kubenswrapper[4823]: I1206 06:49:35.036105 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" podStartSLOduration=3.036076052 podStartE2EDuration="3.036076052s" podCreationTimestamp="2025-12-06 06:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:35.032922351 +0000 UTC m=+1476.318674311" watchObservedRunningTime="2025-12-06 06:49:35.036076052 +0000 UTC m=+1476.321828022" Dec 06 06:49:35 crc kubenswrapper[4823]: I1206 06:49:35.832451 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:35 crc kubenswrapper[4823]: I1206 06:49:35.849837 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.089272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" event={"ID":"d601167c-c425-40f5-ad1d-9cf563033888","Type":"ContainerStarted","Data":"116de26f1fcdbdb30a27bac3e27e6a2cddc81a132a2e1fdc19500fd12a751e43"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.089806 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.092868 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228fbc4e-4973-4671-a38c-b74baed39e11","Type":"ContainerStarted","Data":"d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.093057 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="228fbc4e-4973-4671-a38c-b74baed39e11" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57" gracePeriod=30 Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.096754 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0c356ea-be6e-48b7-99b8-cdb4fdb13466","Type":"ContainerStarted","Data":"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.097048 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0c356ea-be6e-48b7-99b8-cdb4fdb13466","Type":"ContainerStarted","Data":"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.097279 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-log" containerID="cri-o://b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a" gracePeriod=30 Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.097460 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-metadata" containerID="cri-o://0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14" gracePeriod=30 Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.102059 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"973019ce-1e95-4284-b2da-bd24519c84d0","Type":"ContainerStarted","Data":"2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.102115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"973019ce-1e95-4284-b2da-bd24519c84d0","Type":"ContainerStarted","Data":"5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.113925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afc1dd4e-6ce9-45f3-8beb-37138f38a4df","Type":"ContainerStarted","Data":"f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78"} Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.117017 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" podStartSLOduration=10.116990003 podStartE2EDuration="10.116990003s" podCreationTimestamp="2025-12-06 06:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:41.114336376 +0000 UTC m=+1482.400088346" watchObservedRunningTime="2025-12-06 06:49:41.116990003 +0000 UTC m=+1482.402741963" Dec 06 06:49:41 
crc kubenswrapper[4823]: I1206 06:49:41.144848 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.559205924 podStartE2EDuration="10.144820057s" podCreationTimestamp="2025-12-06 06:49:31 +0000 UTC" firstStartedPulling="2025-12-06 06:49:32.472644292 +0000 UTC m=+1473.758396252" lastFinishedPulling="2025-12-06 06:49:40.058258425 +0000 UTC m=+1481.344010385" observedRunningTime="2025-12-06 06:49:41.137675751 +0000 UTC m=+1482.423427711" watchObservedRunningTime="2025-12-06 06:49:41.144820057 +0000 UTC m=+1482.430572037" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.185256 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.123920508 podStartE2EDuration="10.185228424s" podCreationTimestamp="2025-12-06 06:49:31 +0000 UTC" firstStartedPulling="2025-12-06 06:49:33.030119778 +0000 UTC m=+1474.315871738" lastFinishedPulling="2025-12-06 06:49:40.091427694 +0000 UTC m=+1481.377179654" observedRunningTime="2025-12-06 06:49:41.168436039 +0000 UTC m=+1482.454187999" watchObservedRunningTime="2025-12-06 06:49:41.185228424 +0000 UTC m=+1482.470980394" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.210076 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.825089506 podStartE2EDuration="10.210047772s" podCreationTimestamp="2025-12-06 06:49:31 +0000 UTC" firstStartedPulling="2025-12-06 06:49:32.712960365 +0000 UTC m=+1473.998712325" lastFinishedPulling="2025-12-06 06:49:40.097918631 +0000 UTC m=+1481.383670591" observedRunningTime="2025-12-06 06:49:41.189201829 +0000 UTC m=+1482.474953789" watchObservedRunningTime="2025-12-06 06:49:41.210047772 +0000 UTC m=+1482.495799732" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.238129 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.188831654 podStartE2EDuration="10.238098472s" podCreationTimestamp="2025-12-06 06:49:31 +0000 UTC" firstStartedPulling="2025-12-06 06:49:33.011640264 +0000 UTC m=+1474.297392234" lastFinishedPulling="2025-12-06 06:49:40.060907092 +0000 UTC m=+1481.346659052" observedRunningTime="2025-12-06 06:49:41.225837858 +0000 UTC m=+1482.511589828" watchObservedRunningTime="2025-12-06 06:49:41.238098472 +0000 UTC m=+1482.523850432" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.631927 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.631976 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.722751 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.813483 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl86p\" (UniqueName: \"kubernetes.io/projected/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-kube-api-access-cl86p\") pod \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.813802 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-config-data\") pod \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.813841 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-combined-ca-bundle\") pod \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.813917 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-logs\") pod \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\" (UID: \"e0c356ea-be6e-48b7-99b8-cdb4fdb13466\") " Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.819304 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-logs" (OuterVolumeSpecName: "logs") pod "e0c356ea-be6e-48b7-99b8-cdb4fdb13466" (UID: "e0c356ea-be6e-48b7-99b8-cdb4fdb13466"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.836085 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-kube-api-access-cl86p" (OuterVolumeSpecName: "kube-api-access-cl86p") pod "e0c356ea-be6e-48b7-99b8-cdb4fdb13466" (UID: "e0c356ea-be6e-48b7-99b8-cdb4fdb13466"). InnerVolumeSpecName "kube-api-access-cl86p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.862991 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0c356ea-be6e-48b7-99b8-cdb4fdb13466" (UID: "e0c356ea-be6e-48b7-99b8-cdb4fdb13466"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.866132 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-config-data" (OuterVolumeSpecName: "config-data") pod "e0c356ea-be6e-48b7-99b8-cdb4fdb13466" (UID: "e0c356ea-be6e-48b7-99b8-cdb4fdb13466"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.921092 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl86p\" (UniqueName: \"kubernetes.io/projected/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-kube-api-access-cl86p\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.921139 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.921156 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.921168 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0c356ea-be6e-48b7-99b8-cdb4fdb13466-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.967983 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 06:49:41 crc kubenswrapper[4823]: I1206 06:49:41.968039 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.017119 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.127869 4823 generic.go:334] "Generic (PLEG): container finished" podID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerID="0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14" exitCode=0 Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.127918 4823 generic.go:334] "Generic (PLEG): container finished" podID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerID="b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a" exitCode=143 Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.127929 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.128071 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0c356ea-be6e-48b7-99b8-cdb4fdb13466","Type":"ContainerDied","Data":"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14"} Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.130916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0c356ea-be6e-48b7-99b8-cdb4fdb13466","Type":"ContainerDied","Data":"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a"} Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.130930 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0c356ea-be6e-48b7-99b8-cdb4fdb13466","Type":"ContainerDied","Data":"427844862c206939526d919870a1ca90ddff6e108ca8930d12b6590b75b0fc3b"} Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.130951 4823 scope.go:117] "RemoveContainer" containerID="0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.196589 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.198726 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.199109 4823 scope.go:117] "RemoveContainer" containerID="b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.209576 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.250706 4823 scope.go:117] "RemoveContainer" containerID="0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14" Dec 06 06:49:42 crc kubenswrapper[4823]: E1206 06:49:42.257243 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14\": container with ID starting with 0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14 not found: ID does not exist" containerID="0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.257391 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14"} err="failed to get container status \"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14\": rpc error: code = NotFound desc = could not find container \"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14\": container with ID starting with 0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14 not found: ID does not exist" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.257428 4823 scope.go:117] "RemoveContainer" containerID="b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a" Dec 06 06:49:42 crc kubenswrapper[4823]: E1206 06:49:42.258092 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a\": container with ID starting with 
b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a not found: ID does not exist" containerID="b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.258113 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a"} err="failed to get container status \"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a\": rpc error: code = NotFound desc = could not find container \"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a\": container with ID starting with b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a not found: ID does not exist" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.258130 4823 scope.go:117] "RemoveContainer" containerID="0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.259219 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14"} err="failed to get container status \"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14\": rpc error: code = NotFound desc = could not find container \"0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14\": container with ID starting with 0cff0386f2d32947653ae4b8b65bbe76276c3450582572d8dc1ec9946a575e14 not found: ID does not exist" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.259300 4823 scope.go:117] "RemoveContainer" containerID="b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.259782 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a"} err="failed to get container status \"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a\": rpc error: code = NotFound desc = could not find container \"b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a\": container with ID starting with b9f188d7b44f6dde6353d8a084d1728f2693f0f9624bca915cd35115000bb01a not found: ID does not exist" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.263825 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.279382 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:42 crc kubenswrapper[4823]: E1206 06:49:42.279947 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-log" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.279973 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-log" Dec 06 06:49:42 crc kubenswrapper[4823]: E1206 06:49:42.280011 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-metadata" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.280020 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-metadata" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.280329 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-log" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.280357 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" containerName="nova-metadata-metadata" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.281840 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.293121 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.293448 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.295996 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.435973 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.436096 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4kcg\" (UniqueName: \"kubernetes.io/projected/903bd44c-b50e-426d-a9c5-38923f26f220-kube-api-access-x4kcg\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.436449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-config-data\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.436588 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903bd44c-b50e-426d-a9c5-38923f26f220-logs\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.436764 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.539412 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.539565 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4kcg\" (UniqueName: \"kubernetes.io/projected/903bd44c-b50e-426d-a9c5-38923f26f220-kube-api-access-x4kcg\") pod \"nova-metadata-0\" (UID: 
\"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.539677 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-config-data\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.539755 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903bd44c-b50e-426d-a9c5-38923f26f220-logs\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.539805 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.540291 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903bd44c-b50e-426d-a9c5-38923f26f220-logs\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.543702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.544622 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-config-data\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.545288 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.566723 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4kcg\" (UniqueName: \"kubernetes.io/projected/903bd44c-b50e-426d-a9c5-38923f26f220-kube-api-access-x4kcg\") pod \"nova-metadata-0\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.614432 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.717040 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:49:42 crc kubenswrapper[4823]: I1206 06:49:42.717079 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:49:43 crc kubenswrapper[4823]: I1206 06:49:43.157948 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c356ea-be6e-48b7-99b8-cdb4fdb13466" path="/var/lib/kubelet/pods/e0c356ea-be6e-48b7-99b8-cdb4fdb13466/volumes" Dec 06 06:49:43 crc kubenswrapper[4823]: I1206 06:49:43.353466 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:44 crc kubenswrapper[4823]: I1206 06:49:44.165081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"903bd44c-b50e-426d-a9c5-38923f26f220","Type":"ContainerStarted","Data":"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d"} Dec 06 06:49:44 crc kubenswrapper[4823]: I1206 06:49:44.165133 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"903bd44c-b50e-426d-a9c5-38923f26f220","Type":"ContainerStarted","Data":"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385"} Dec 06 06:49:44 crc kubenswrapper[4823]: I1206 06:49:44.165145 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"903bd44c-b50e-426d-a9c5-38923f26f220","Type":"ContainerStarted","Data":"f9ade410914d1bd55f6cc81ed679b935bd4d339f034f8dd241491fa9649c5cf3"} Dec 06 06:49:44 crc kubenswrapper[4823]: I1206 06:49:44.169247 4823 generic.go:334] "Generic (PLEG): container finished" podID="36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" containerID="71d0841bd99163cec97b53ba0cf403bc9f24472eb24888d603b77b1ebc93c420" exitCode=0 Dec 06 06:49:44 crc kubenswrapper[4823]: I1206 06:49:44.169288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mx8j" event={"ID":"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c","Type":"ContainerDied","Data":"71d0841bd99163cec97b53ba0cf403bc9f24472eb24888d603b77b1ebc93c420"} Dec 06 06:49:44 crc kubenswrapper[4823]: I1206 06:49:44.189713 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.189689195 podStartE2EDuration="2.189689195s" podCreationTimestamp="2025-12-06 06:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:44.188293195 +0000 UTC m=+1485.474045165" watchObservedRunningTime="2025-12-06 06:49:44.189689195 +0000 UTC m=+1485.475441165" Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.742770 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.920928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghcj\" (UniqueName: \"kubernetes.io/projected/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-kube-api-access-zghcj\") pod \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.921032 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-combined-ca-bundle\") pod \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.921086 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-config-data\") pod \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.921114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-scripts\") pod \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\" (UID: \"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c\") " Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.927849 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-kube-api-access-zghcj" (OuterVolumeSpecName: "kube-api-access-zghcj") pod "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" (UID: "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c"). InnerVolumeSpecName "kube-api-access-zghcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.928686 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-scripts" (OuterVolumeSpecName: "scripts") pod "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" (UID: "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.966214 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" (UID: "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:45 crc kubenswrapper[4823]: I1206 06:49:45.968724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-config-data" (OuterVolumeSpecName: "config-data") pod "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" (UID: "36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.024513 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghcj\" (UniqueName: \"kubernetes.io/projected/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-kube-api-access-zghcj\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.024558 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.024568 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.024576 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.193783 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mx8j" event={"ID":"36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c","Type":"ContainerDied","Data":"6478bc6ae5f1e5de56ed03bdf39ebf427c09512faceaf0c5a6d2d444c4c1c6ef"} Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.193838 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6478bc6ae5f1e5de56ed03bdf39ebf427c09512faceaf0c5a6d2d444c4c1c6ef" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.193926 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mx8j" Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.308938 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.309237 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-log" containerID="cri-o://5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271" gracePeriod=30 Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.309980 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-api" containerID="cri-o://2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6" gracePeriod=30 Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.360253 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.360563 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" containerName="nova-scheduler-scheduler" containerID="cri-o://f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" gracePeriod=30 Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.381101 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.381491 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" 
containerName="nova-metadata-log" containerID="cri-o://31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385" gracePeriod=30 Dec 06 06:49:46 crc kubenswrapper[4823]: I1206 06:49:46.381591 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-metadata" containerID="cri-o://a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d" gracePeriod=30 Dec 06 06:49:46 crc kubenswrapper[4823]: E1206 06:49:46.482641 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b6b1c1_b316_4c9e_b1cb_cb1bd622c64c.slice/crio-6478bc6ae5f1e5de56ed03bdf39ebf427c09512faceaf0c5a6d2d444c4c1c6ef\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b6b1c1_b316_4c9e_b1cb_cb1bd622c64c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903bd44c_b50e_426d_a9c5_38923f26f220.slice/crio-conmon-31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385.scope\": RecentStats: unable to find data in memory cache]" Dec 06 06:49:46 crc kubenswrapper[4823]: E1206 06:49:46.969642 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 06:49:46 crc kubenswrapper[4823]: E1206 06:49:46.972361 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 06:49:46 crc kubenswrapper[4823]: E1206 06:49:46.991087 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 06:49:46 crc kubenswrapper[4823]: E1206 06:49:46.991187 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" containerName="nova-scheduler-scheduler" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.098804 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.098953 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.213811 4823 generic.go:334] "Generic (PLEG): container finished" podID="973019ce-1e95-4284-b2da-bd24519c84d0" containerID="5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271" exitCode=143 Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.213950 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"973019ce-1e95-4284-b2da-bd24519c84d0","Type":"ContainerDied","Data":"5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271"} Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229466 4823 generic.go:334] "Generic (PLEG): container finished" podID="903bd44c-b50e-426d-a9c5-38923f26f220" containerID="a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d" exitCode=0 Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229511 4823 generic.go:334] "Generic (PLEG): container finished" podID="903bd44c-b50e-426d-a9c5-38923f26f220" containerID="31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385" exitCode=143 Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"903bd44c-b50e-426d-a9c5-38923f26f220","Type":"ContainerDied","Data":"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d"} Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229577 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"903bd44c-b50e-426d-a9c5-38923f26f220","Type":"ContainerDied","Data":"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385"} Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229592 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"903bd44c-b50e-426d-a9c5-38923f26f220","Type":"ContainerDied","Data":"f9ade410914d1bd55f6cc81ed679b935bd4d339f034f8dd241491fa9649c5cf3"} Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229616 4823 scope.go:117] "RemoveContainer" containerID="a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.229824 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.230429 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b97456bf-s7qh8"] Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.230758 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="dnsmasq-dns" containerID="cri-o://4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87" gracePeriod=10 Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.257719 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4kcg\" (UniqueName: \"kubernetes.io/projected/903bd44c-b50e-426d-a9c5-38923f26f220-kube-api-access-x4kcg\") pod \"903bd44c-b50e-426d-a9c5-38923f26f220\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.258078 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-nova-metadata-tls-certs\") pod \"903bd44c-b50e-426d-a9c5-38923f26f220\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.258209 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-combined-ca-bundle\") pod \"903bd44c-b50e-426d-a9c5-38923f26f220\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.258246 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-config-data\") pod \"903bd44c-b50e-426d-a9c5-38923f26f220\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.258369 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903bd44c-b50e-426d-a9c5-38923f26f220-logs\") pod \"903bd44c-b50e-426d-a9c5-38923f26f220\" (UID: \"903bd44c-b50e-426d-a9c5-38923f26f220\") " Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.260753 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903bd44c-b50e-426d-a9c5-38923f26f220-logs" (OuterVolumeSpecName: "logs") pod "903bd44c-b50e-426d-a9c5-38923f26f220" (UID: "903bd44c-b50e-426d-a9c5-38923f26f220"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.261331 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903bd44c-b50e-426d-a9c5-38923f26f220-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.267012 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903bd44c-b50e-426d-a9c5-38923f26f220-kube-api-access-x4kcg" (OuterVolumeSpecName: "kube-api-access-x4kcg") pod "903bd44c-b50e-426d-a9c5-38923f26f220" (UID: "903bd44c-b50e-426d-a9c5-38923f26f220"). InnerVolumeSpecName "kube-api-access-x4kcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.346461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-config-data" (OuterVolumeSpecName: "config-data") pod "903bd44c-b50e-426d-a9c5-38923f26f220" (UID: "903bd44c-b50e-426d-a9c5-38923f26f220"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.352851 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "903bd44c-b50e-426d-a9c5-38923f26f220" (UID: "903bd44c-b50e-426d-a9c5-38923f26f220"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.360758 4823 scope.go:117] "RemoveContainer" containerID="31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.360806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "903bd44c-b50e-426d-a9c5-38923f26f220" (UID: "903bd44c-b50e-426d-a9c5-38923f26f220"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.367422 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4kcg\" (UniqueName: \"kubernetes.io/projected/903bd44c-b50e-426d-a9c5-38923f26f220-kube-api-access-x4kcg\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.367461 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.367477 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.367489 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903bd44c-b50e-426d-a9c5-38923f26f220-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.590350 4823 scope.go:117] "RemoveContainer" containerID="a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d" Dec 06 06:49:47 crc kubenswrapper[4823]: E1206 06:49:47.593305 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d\": container with ID starting with a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d not found: ID does not exist" containerID="a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.593361 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d"} 
err="failed to get container status \"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d\": rpc error: code = NotFound desc = could not find container \"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d\": container with ID starting with a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d not found: ID does not exist" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.593397 4823 scope.go:117] "RemoveContainer" containerID="31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385" Dec 06 06:49:47 crc kubenswrapper[4823]: E1206 06:49:47.598701 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385\": container with ID starting with 31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385 not found: ID does not exist" containerID="31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.598754 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385"} err="failed to get container status \"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385\": rpc error: code = NotFound desc = could not find container \"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385\": container with ID starting with 31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385 not found: ID does not exist" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.598823 4823 scope.go:117] "RemoveContainer" containerID="a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.599388 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d"} err="failed to get container status \"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d\": rpc error: code = NotFound desc = could not find container \"a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d\": container with ID starting with a0a5b896249cfe6c72ee54befa6070cd8a928770b19662eecd6fe2f5796ad74d not found: ID does not exist" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.599413 4823 scope.go:117] "RemoveContainer" containerID="31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.599699 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385"} err="failed to get container status \"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385\": rpc error: code = NotFound desc = could not find container \"31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385\": container with ID starting with 31979db507c604df57e226dede7d5f2ebff8451db5f1c5e324e7b7a33a7a8385 not found: ID does not exist" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.619846 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.706877 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.739506 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:47 crc kubenswrapper[4823]: E1206 06:49:47.740399 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" containerName="nova-manage" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.740627 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" containerName="nova-manage" Dec 06 06:49:47 crc kubenswrapper[4823]: E1206 06:49:47.740700 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-log" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.740710 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-log" Dec 06 06:49:47 crc kubenswrapper[4823]: E1206 06:49:47.740724 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-metadata" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.740730 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-metadata" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.741635 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" containerName="nova-manage" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.741682 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-metadata" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.741696 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" containerName="nova-metadata-log" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.744363 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.747307 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.747649 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.763265 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.798567 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrfs\" (UniqueName: \"kubernetes.io/projected/17bcdcc5-1bc7-456d-9574-d6fe00683166-kube-api-access-6vrfs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.798625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.798656 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.798791 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-config-data\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.799463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bcdcc5-1bc7-456d-9574-d6fe00683166-logs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.902343 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bcdcc5-1bc7-456d-9574-d6fe00683166-logs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.902465 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrfs\" (UniqueName: \"kubernetes.io/projected/17bcdcc5-1bc7-456d-9574-d6fe00683166-kube-api-access-6vrfs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.902511 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " 
pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.902548 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.911411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.911784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-config-data\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.912203 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bcdcc5-1bc7-456d-9574-d6fe00683166-logs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.916385 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.916422 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-config-data\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:47 crc kubenswrapper[4823]: I1206 06:49:47.939650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrfs\" (UniqueName: \"kubernetes.io/projected/17bcdcc5-1bc7-456d-9574-d6fe00683166-kube-api-access-6vrfs\") pod \"nova-metadata-0\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " pod="openstack/nova-metadata-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.059912 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.066448 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.103158 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115304 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-nb\") pod \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115392 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmpkh\" (UniqueName: \"kubernetes.io/projected/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-kube-api-access-tmpkh\") pod \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115438 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-config-data\") pod \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115483 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g8nt\" (UniqueName: \"kubernetes.io/projected/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-kube-api-access-2g8nt\") pod \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115512 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-swift-storage-0\") pod \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115551 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-svc\") pod \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115696 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-config\") pod \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115738 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-combined-ca-bundle\") pod \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\" (UID: \"afc1dd4e-6ce9-45f3-8beb-37138f38a4df\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.115811 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-sb\") pod \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\" (UID: \"21e6e5f3-ac1d-48b6-871a-8a8d52cee775\") " Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.122645 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-kube-api-access-tmpkh" (OuterVolumeSpecName: "kube-api-access-tmpkh") pod 
"afc1dd4e-6ce9-45f3-8beb-37138f38a4df" (UID: "afc1dd4e-6ce9-45f3-8beb-37138f38a4df"). InnerVolumeSpecName "kube-api-access-tmpkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.143931 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-kube-api-access-2g8nt" (OuterVolumeSpecName: "kube-api-access-2g8nt") pod "21e6e5f3-ac1d-48b6-871a-8a8d52cee775" (UID: "21e6e5f3-ac1d-48b6-871a-8a8d52cee775"). InnerVolumeSpecName "kube-api-access-2g8nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.191510 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-config-data" (OuterVolumeSpecName: "config-data") pod "afc1dd4e-6ce9-45f3-8beb-37138f38a4df" (UID: "afc1dd4e-6ce9-45f3-8beb-37138f38a4df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.209962 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21e6e5f3-ac1d-48b6-871a-8a8d52cee775" (UID: "21e6e5f3-ac1d-48b6-871a-8a8d52cee775"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.218183 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21e6e5f3-ac1d-48b6-871a-8a8d52cee775" (UID: "21e6e5f3-ac1d-48b6-871a-8a8d52cee775"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.230863 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmpkh\" (UniqueName: \"kubernetes.io/projected/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-kube-api-access-tmpkh\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.230890 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.230903 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g8nt\" (UniqueName: \"kubernetes.io/projected/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-kube-api-access-2g8nt\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.230913 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.230924 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.239724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afc1dd4e-6ce9-45f3-8beb-37138f38a4df" (UID: "afc1dd4e-6ce9-45f3-8beb-37138f38a4df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.250244 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21e6e5f3-ac1d-48b6-871a-8a8d52cee775" (UID: "21e6e5f3-ac1d-48b6-871a-8a8d52cee775"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.299013 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21e6e5f3-ac1d-48b6-871a-8a8d52cee775" (UID: "21e6e5f3-ac1d-48b6-871a-8a8d52cee775"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.312593 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-config" (OuterVolumeSpecName: "config") pod "21e6e5f3-ac1d-48b6-871a-8a8d52cee775" (UID: "21e6e5f3-ac1d-48b6-871a-8a8d52cee775"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.324561 4823 generic.go:334] "Generic (PLEG): container finished" podID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" exitCode=0 Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.324676 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afc1dd4e-6ce9-45f3-8beb-37138f38a4df","Type":"ContainerDied","Data":"f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78"} Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.324719 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afc1dd4e-6ce9-45f3-8beb-37138f38a4df","Type":"ContainerDied","Data":"069097556d30b4db6007dfc615ba13c257962c328c191eeef41900b2c132b762"} Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.324742 4823 scope.go:117] "RemoveContainer" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.324923 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.327023 4823 generic.go:334] "Generic (PLEG): container finished" podID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerID="4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87" exitCode=0 Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.327056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" event={"ID":"21e6e5f3-ac1d-48b6-871a-8a8d52cee775","Type":"ContainerDied","Data":"4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87"} Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.327078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" event={"ID":"21e6e5f3-ac1d-48b6-871a-8a8d52cee775","Type":"ContainerDied","Data":"b8a60b1e3bc1985cf46dbb083522a702f63852240c1068cd19b44e8b5d225689"} Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.327106 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.332395 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc1dd4e-6ce9-45f3-8beb-37138f38a4df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.332442 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.332454 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.332470 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e6e5f3-ac1d-48b6-871a-8a8d52cee775-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.355032 4823 scope.go:117] "RemoveContainer" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" Dec 06 06:49:48 crc kubenswrapper[4823]: E1206 06:49:48.360020 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78\": container with ID starting with f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78 not found: ID does not exist" containerID="f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.360106 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78"} err="failed to get container status \"f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78\": rpc error: code = NotFound desc = could not find container \"f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78\": container with ID starting with f4224dc747361066457578d1209289c37a51aaf5f36070feefeae5d1920cfe78 not found: ID does not exist" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.360152 4823 scope.go:117] "RemoveContainer" containerID="4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.381868 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.408514 4823 scope.go:117] "RemoveContainer" containerID="0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.411908 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.427704 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:48 crc kubenswrapper[4823]: E1206 06:49:48.428371 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="init" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.428397 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="init" 
Dec 06 06:49:48 crc kubenswrapper[4823]: E1206 06:49:48.428408 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" containerName="nova-scheduler-scheduler" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.428419 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" containerName="nova-scheduler-scheduler" Dec 06 06:49:48 crc kubenswrapper[4823]: E1206 06:49:48.428443 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="dnsmasq-dns" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.428452 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="dnsmasq-dns" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.428844 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" containerName="nova-scheduler-scheduler" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.428885 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="dnsmasq-dns" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.429897 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.432255 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.443709 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b97456bf-s7qh8"] Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.451063 4823 scope.go:117] "RemoveContainer" containerID="4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87" Dec 06 06:49:48 crc kubenswrapper[4823]: E1206 06:49:48.451706 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87\": container with ID starting with 4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87 not found: ID does not exist" containerID="4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.451763 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87"} err="failed to get container status \"4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87\": rpc error: code = NotFound desc = could not find container \"4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87\": container with ID starting with 4cb29bddf3ec97f64ac08a9e867469482fc248c696b890468eafdb5f00dacc87 not found: ID does not exist" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.451797 4823 scope.go:117] "RemoveContainer" containerID="0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5" Dec 06 06:49:48 crc kubenswrapper[4823]: E1206 06:49:48.455058 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5\": container with ID starting with 0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5 not found: ID does not exist" 
containerID="0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.455110 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5"} err="failed to get container status \"0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5\": rpc error: code = NotFound desc = could not find container \"0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5\": container with ID starting with 0827169bde32268cacecde80378c5bf96429dd9c43f027b7a54b0454b194fdc5 not found: ID does not exist" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.461022 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b97456bf-s7qh8"] Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.471327 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.537115 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9kl\" (UniqueName: \"kubernetes.io/projected/ac353c50-086b-4a10-9976-71287895e09f-kube-api-access-zt9kl\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.537215 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-config-data\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.537445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.639946 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.640322 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9kl\" (UniqueName: \"kubernetes.io/projected/ac353c50-086b-4a10-9976-71287895e09f-kube-api-access-zt9kl\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.640567 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-config-data\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.644970 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-config-data\") pod \"nova-scheduler-0\" (UID: 
\"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.646367 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.664156 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9kl\" (UniqueName: \"kubernetes.io/projected/ac353c50-086b-4a10-9976-71287895e09f-kube-api-access-zt9kl\") pod \"nova-scheduler-0\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " pod="openstack/nova-scheduler-0" Dec 06 06:49:48 crc kubenswrapper[4823]: W1206 06:49:48.688365 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17bcdcc5_1bc7_456d_9574_d6fe00683166.slice/crio-03007b8c113f1f915940f4ad4d4c4b634facad504d198412ba40901cb7892578 WatchSource:0}: Error finding container 03007b8c113f1f915940f4ad4d4c4b634facad504d198412ba40901cb7892578: Status 404 returned error can't find the container with id 03007b8c113f1f915940f4ad4d4c4b634facad504d198412ba40901cb7892578 Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.696751 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:49:48 crc kubenswrapper[4823]: I1206 06:49:48.752499 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.162290 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" path="/var/lib/kubelet/pods/21e6e5f3-ac1d-48b6-871a-8a8d52cee775/volumes" Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.163975 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903bd44c-b50e-426d-a9c5-38923f26f220" path="/var/lib/kubelet/pods/903bd44c-b50e-426d-a9c5-38923f26f220/volumes" Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.165100 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc1dd4e-6ce9-45f3-8beb-37138f38a4df" path="/var/lib/kubelet/pods/afc1dd4e-6ce9-45f3-8beb-37138f38a4df/volumes" Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.219313 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.349306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac353c50-086b-4a10-9976-71287895e09f","Type":"ContainerStarted","Data":"573f276cc9e4bf7aad5aef0a9576854ec8b725a7be1e6520f12690546d6ef036"} Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.353907 4823 generic.go:334] "Generic (PLEG): container finished" podID="53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" containerID="5ebef798af4f48fab5049f43add77c2defb01ff161364f7cc081d96a1d477f59" exitCode=0 Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.353989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" event={"ID":"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5","Type":"ContainerDied","Data":"5ebef798af4f48fab5049f43add77c2defb01ff161364f7cc081d96a1d477f59"} Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.363858 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bcdcc5-1bc7-456d-9574-d6fe00683166","Type":"ContainerStarted","Data":"b50c56e76a54547cca34dd072b59bd5371095ca471fc4c00ec95a1c9f8d4f725"} Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.363936 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bcdcc5-1bc7-456d-9574-d6fe00683166","Type":"ContainerStarted","Data":"938a2223669fddc1cb8aaf78ff68285035cff8ad5e544466f4c898c0d8c743ca"} Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.363949 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bcdcc5-1bc7-456d-9574-d6fe00683166","Type":"ContainerStarted","Data":"03007b8c113f1f915940f4ad4d4c4b634facad504d198412ba40901cb7892578"} Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.400304 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.400281413 podStartE2EDuration="2.400281413s" podCreationTimestamp="2025-12-06 06:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:49.392075216 +0000 UTC m=+1490.677827176" watchObservedRunningTime="2025-12-06 06:49:49.400281413 +0000 UTC m=+1490.686033373" Dec 06 06:49:49 crc kubenswrapper[4823]: I1206 06:49:49.992194 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.104437 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-combined-ca-bundle\") pod \"973019ce-1e95-4284-b2da-bd24519c84d0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.104643 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-config-data\") pod \"973019ce-1e95-4284-b2da-bd24519c84d0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.104744 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cskz8\" (UniqueName: \"kubernetes.io/projected/973019ce-1e95-4284-b2da-bd24519c84d0-kube-api-access-cskz8\") pod \"973019ce-1e95-4284-b2da-bd24519c84d0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.104823 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/973019ce-1e95-4284-b2da-bd24519c84d0-logs\") pod \"973019ce-1e95-4284-b2da-bd24519c84d0\" (UID: \"973019ce-1e95-4284-b2da-bd24519c84d0\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.105908 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/973019ce-1e95-4284-b2da-bd24519c84d0-logs" (OuterVolumeSpecName: "logs") pod "973019ce-1e95-4284-b2da-bd24519c84d0" (UID: "973019ce-1e95-4284-b2da-bd24519c84d0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.113090 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973019ce-1e95-4284-b2da-bd24519c84d0-kube-api-access-cskz8" (OuterVolumeSpecName: "kube-api-access-cskz8") pod "973019ce-1e95-4284-b2da-bd24519c84d0" (UID: "973019ce-1e95-4284-b2da-bd24519c84d0"). InnerVolumeSpecName "kube-api-access-cskz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.157242 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "973019ce-1e95-4284-b2da-bd24519c84d0" (UID: "973019ce-1e95-4284-b2da-bd24519c84d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.176785 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-config-data" (OuterVolumeSpecName: "config-data") pod "973019ce-1e95-4284-b2da-bd24519c84d0" (UID: "973019ce-1e95-4284-b2da-bd24519c84d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.208896 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.208942 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cskz8\" (UniqueName: \"kubernetes.io/projected/973019ce-1e95-4284-b2da-bd24519c84d0-kube-api-access-cskz8\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.209008 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/973019ce-1e95-4284-b2da-bd24519c84d0-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.209025 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973019ce-1e95-4284-b2da-bd24519c84d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.388815 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac353c50-086b-4a10-9976-71287895e09f","Type":"ContainerStarted","Data":"8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235"} Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.392986 4823 generic.go:334] "Generic (PLEG): container finished" podID="973019ce-1e95-4284-b2da-bd24519c84d0" containerID="2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6" exitCode=0 Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.394077 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.394300 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"973019ce-1e95-4284-b2da-bd24519c84d0","Type":"ContainerDied","Data":"2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6"} Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.394415 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"973019ce-1e95-4284-b2da-bd24519c84d0","Type":"ContainerDied","Data":"5c190cb2eb36084da2c8d92d713c40bc5085eebb6966af431272c83748ff6a51"} Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.394456 4823 scope.go:117] "RemoveContainer" containerID="2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.419357 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.419326264 podStartE2EDuration="2.419326264s" podCreationTimestamp="2025-12-06 06:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:50.412143296 +0000 UTC m=+1491.697895336" watchObservedRunningTime="2025-12-06 06:49:50.419326264 +0000 UTC m=+1491.705078224" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.455472 4823 scope.go:117] "RemoveContainer" containerID="5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.476902 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.524506 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.524568 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:50 crc kubenswrapper[4823]: E1206 06:49:50.525000 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-api" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.525023 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-api" Dec 06 06:49:50 crc kubenswrapper[4823]: E1206 06:49:50.525040 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-log" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.525046 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-log" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.525244 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-log" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.525260 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" containerName="nova-api-api" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.526500 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.533352 4823 scope.go:117] "RemoveContainer" containerID="2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6" Dec 06 06:49:50 crc kubenswrapper[4823]: E1206 06:49:50.534103 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6\": container with ID starting with 2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6 not found: ID does not exist" containerID="2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.534146 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6"} err="failed to get container status \"2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6\": rpc error: code = NotFound desc = could not find container \"2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6\": container with ID starting with 2cd39d0077503ba79d94ab36dc98759dea1da7521c82bd508cb8dd460e15cbd6 not found: ID does not exist" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.534176 4823 scope.go:117] "RemoveContainer" containerID="5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.536452 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.536737 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:50 crc kubenswrapper[4823]: E1206 06:49:50.536752 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271\": container with ID starting with 5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271 not found: ID does not exist" containerID="5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.536784 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271"} err="failed to get container status \"5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271\": rpc error: code = NotFound desc = could not find container \"5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271\": container with ID starting with 5d1686dc7c38850be4c4c04a27a2c167c40fafbdf7df3da677eaa287e6a86271 not found: ID does not exist" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.636860 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-config-data\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.637062 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skng5\" (UniqueName: \"kubernetes.io/projected/0a9aa958-25be-4e23-a51d-f08b175ce5c6-kube-api-access-skng5\") pod \"nova-api-0\" (UID: 
\"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.637147 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.637187 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa958-25be-4e23-a51d-f08b175ce5c6-logs\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.738792 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skng5\" (UniqueName: \"kubernetes.io/projected/0a9aa958-25be-4e23-a51d-f08b175ce5c6-kube-api-access-skng5\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.738882 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.738925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa958-25be-4e23-a51d-f08b175ce5c6-logs\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.739013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-config-data\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.740284 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa958-25be-4e23-a51d-f08b175ce5c6-logs\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.754136 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.756199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-config-data\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.757369 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skng5\" (UniqueName: \"kubernetes.io/projected/0a9aa958-25be-4e23-a51d-f08b175ce5c6-kube-api-access-skng5\") pod \"nova-api-0\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " 
pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.877323 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.877883 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.942836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-config-data\") pod \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.943354 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-scripts\") pod \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.943494 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49mgt\" (UniqueName: \"kubernetes.io/projected/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-kube-api-access-49mgt\") pod \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.943602 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-combined-ca-bundle\") pod \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\" (UID: \"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5\") " Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.950084 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-scripts" (OuterVolumeSpecName: "scripts") pod "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" (UID: "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.952533 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-kube-api-access-49mgt" (OuterVolumeSpecName: "kube-api-access-49mgt") pod "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" (UID: "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5"). InnerVolumeSpecName "kube-api-access-49mgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.983402 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-config-data" (OuterVolumeSpecName: "config-data") pod "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" (UID: "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:50 crc kubenswrapper[4823]: I1206 06:49:50.984565 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" (UID: "53ac01ab-ea2b-4b2c-9e2e-dab4612351d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.046458 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49mgt\" (UniqueName: \"kubernetes.io/projected/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-kube-api-access-49mgt\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.046488 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.046500 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.046508 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.156238 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973019ce-1e95-4284-b2da-bd24519c84d0" path="/var/lib/kubelet/pods/973019ce-1e95-4284-b2da-bd24519c84d0/volumes" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.420951 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" event={"ID":"53ac01ab-ea2b-4b2c-9e2e-dab4612351d5","Type":"ContainerDied","Data":"1b0b6bb492a8aa84a2d7fadbcb9bb6f231dc07c1e2601552cf420188e3399931"} Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.421007 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b0b6bb492a8aa84a2d7fadbcb9bb6f231dc07c1e2601552cf420188e3399931" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.421101 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kgd2q" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.464816 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 06:49:52 crc kubenswrapper[4823]: E1206 06:49:51.465395 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" containerName="nova-cell1-conductor-db-sync" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.465410 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" containerName="nova-cell1-conductor-db-sync" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.465673 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" containerName="nova-cell1-conductor-db-sync" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.466730 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.471573 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.481694 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.559986 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e153aa97-4c79-491e-8392-cd40d3a40d19-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.560049 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsf4d\" (UniqueName: \"kubernetes.io/projected/e153aa97-4c79-491e-8392-cd40d3a40d19-kube-api-access-tsf4d\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.560746 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e153aa97-4c79-491e-8392-cd40d3a40d19-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.662793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e153aa97-4c79-491e-8392-cd40d3a40d19-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.662947 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsf4d\" (UniqueName: \"kubernetes.io/projected/e153aa97-4c79-491e-8392-cd40d3a40d19-kube-api-access-tsf4d\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.663191 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e153aa97-4c79-491e-8392-cd40d3a40d19-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.667467 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e153aa97-4c79-491e-8392-cd40d3a40d19-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.668055 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e153aa97-4c79-491e-8392-cd40d3a40d19-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.687933 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsf4d\" (UniqueName: \"kubernetes.io/projected/e153aa97-4c79-491e-8392-cd40d3a40d19-kube-api-access-tsf4d\") pod \"nova-cell1-conductor-0\" (UID: \"e153aa97-4c79-491e-8392-cd40d3a40d19\") " pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:51.789407 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:52 crc kubenswrapper[4823]: W1206 06:49:52.368736 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a9aa958_25be_4e23_a51d_f08b175ce5c6.slice/crio-3c8e67606534e4df3648203fc70bc0dbc40aa3f9980f047174b53fe9e7c19e40 WatchSource:0}: Error finding container 3c8e67606534e4df3648203fc70bc0dbc40aa3f9980f047174b53fe9e7c19e40: Status 404 returned error can't find the container with id 3c8e67606534e4df3648203fc70bc0dbc40aa3f9980f047174b53fe9e7c19e40 Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:52.371958 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:52.449219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a9aa958-25be-4e23-a51d-f08b175ce5c6","Type":"ContainerStarted","Data":"3c8e67606534e4df3648203fc70bc0dbc40aa3f9980f047174b53fe9e7c19e40"} Dec 06 06:49:52 crc kubenswrapper[4823]: W1206 06:49:52.480698 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode153aa97_4c79_491e_8392_cd40d3a40d19.slice/crio-b84773dab0d60a223c66bf095b233ef863a69099fb3c6aced39af983fcedd744 WatchSource:0}: Error finding container b84773dab0d60a223c66bf095b233ef863a69099fb3c6aced39af983fcedd744: Status 404 returned error can't find the container with id b84773dab0d60a223c66bf095b233ef863a69099fb3c6aced39af983fcedd744 Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:52.485803 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 06:49:52 crc kubenswrapper[4823]: I1206 06:49:52.944021 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54b97456bf-s7qh8" podUID="21e6e5f3-ac1d-48b6-871a-8a8d52cee775" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: i/o timeout" Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.104611 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.106198 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.461863 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a9aa958-25be-4e23-a51d-f08b175ce5c6","Type":"ContainerStarted","Data":"babe1ce126ec96ce515734f650f9fd130ac4c1093bac34c09e6c989b2c48df56"} Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.462230 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a9aa958-25be-4e23-a51d-f08b175ce5c6","Type":"ContainerStarted","Data":"1937f979cfeb965f1f8341bdb85b27425d1ac9db2c6cc0362815110a07ded1e4"} Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.464203 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-0" event={"ID":"e153aa97-4c79-491e-8392-cd40d3a40d19","Type":"ContainerStarted","Data":"8bd6aab5e67070f2b599c4460fc34d0fec7bb8fc8897afd9cb7d89b8d81a64cd"} Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.464234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e153aa97-4c79-491e-8392-cd40d3a40d19","Type":"ContainerStarted","Data":"b84773dab0d60a223c66bf095b233ef863a69099fb3c6aced39af983fcedd744"} Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.464250 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.493404 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.493369031 podStartE2EDuration="3.493369031s" podCreationTimestamp="2025-12-06 06:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:53.490100327 +0000 UTC m=+1494.775852287" watchObservedRunningTime="2025-12-06 06:49:53.493369031 +0000 UTC m=+1494.779120991" Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.527061 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.527034607 podStartE2EDuration="2.527034607s" podCreationTimestamp="2025-12-06 06:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:49:53.504500944 +0000 UTC m=+1494.790252914" watchObservedRunningTime="2025-12-06 06:49:53.527034607 +0000 UTC m=+1494.812786577" Dec 06 06:49:53 crc kubenswrapper[4823]: I1206 06:49:53.753905 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 06:49:57 crc kubenswrapper[4823]: I1206 06:49:57.155143 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 06:49:58 crc kubenswrapper[4823]: I1206 06:49:58.104402 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 06:49:58 crc kubenswrapper[4823]: I1206 06:49:58.104741 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 06:49:58 crc kubenswrapper[4823]: I1206 06:49:58.753768 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 06:49:58 crc kubenswrapper[4823]: I1206 06:49:58.790528 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 06:49:59 crc kubenswrapper[4823]: I1206 06:49:59.123955 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:49:59 crc kubenswrapper[4823]: I1206 06:49:59.123969 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Dec 06 06:49:59 crc kubenswrapper[4823]: I1206 06:49:59.572018 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 06 06:50:00 crc kubenswrapper[4823]: I1206 06:50:00.878831 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 06 06:50:00 crc kubenswrapper[4823]: I1206 06:50:00.879144 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.131326 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.131637 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21" containerName="kube-state-metrics" containerID="cri-o://d9932ec16bd69a77bb4af4b0c40d2c82a078aa359ca3fa24036c6414537d1360" gracePeriod=30
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.561717 4823 generic.go:334] "Generic (PLEG): container finished" podID="de2d0c7c-d378-4a38-956d-56a576de5c21" containerID="d9932ec16bd69a77bb4af4b0c40d2c82a078aa359ca3fa24036c6414537d1360" exitCode=2
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.562158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de2d0c7c-d378-4a38-956d-56a576de5c21","Type":"ContainerDied","Data":"d9932ec16bd69a77bb4af4b0c40d2c82a078aa359ca3fa24036c6414537d1360"}
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.562209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de2d0c7c-d378-4a38-956d-56a576de5c21","Type":"ContainerDied","Data":"845fb7b05747f3c3fd4d56b1312f7e600e52397345780ad4f8c6b3e541a7da24"}
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.562227 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845fb7b05747f3c3fd4d56b1312f7e600e52397345780ad4f8c6b3e541a7da24"
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.651238 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.734254 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z9lr\" (UniqueName: \"kubernetes.io/projected/de2d0c7c-d378-4a38-956d-56a576de5c21-kube-api-access-5z9lr\") pod \"de2d0c7c-d378-4a38-956d-56a576de5c21\" (UID: \"de2d0c7c-d378-4a38-956d-56a576de5c21\") "
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.751608 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2d0c7c-d378-4a38-956d-56a576de5c21-kube-api-access-5z9lr" (OuterVolumeSpecName: "kube-api-access-5z9lr") pod "de2d0c7c-d378-4a38-956d-56a576de5c21" (UID: "de2d0c7c-d378-4a38-956d-56a576de5c21"). InnerVolumeSpecName "kube-api-access-5z9lr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.825988 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.837571 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z9lr\" (UniqueName: \"kubernetes.io/projected/de2d0c7c-d378-4a38-956d-56a576de5c21-kube-api-access-5z9lr\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.920304 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 06:50:01 crc kubenswrapper[4823]: I1206 06:50:01.961055 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.572523 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.607095 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.621812 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.636097 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 06:50:02 crc kubenswrapper[4823]: E1206 06:50:02.636687 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21" containerName="kube-state-metrics"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.636714 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21" containerName="kube-state-metrics"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.636993 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21" containerName="kube-state-metrics"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.638004 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.641368 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.642300 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.662001 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.756824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.756979 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.757005 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.757043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2ff\" (UniqueName: \"kubernetes.io/projected/62548e33-ebf2-47ed-b520-84fb85791699-kube-api-access-pr2ff\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.858715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.858789 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.858868 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2ff\" (UniqueName: \"kubernetes.io/projected/62548e33-ebf2-47ed-b520-84fb85791699-kube-api-access-pr2ff\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.858976 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.876946 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.877814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.880063 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62548e33-ebf2-47ed-b520-84fb85791699-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.882100 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2ff\" (UniqueName: \"kubernetes.io/projected/62548e33-ebf2-47ed-b520-84fb85791699-kube-api-access-pr2ff\") pod \"kube-state-metrics-0\" (UID: \"62548e33-ebf2-47ed-b520-84fb85791699\") " pod="openstack/kube-state-metrics-0"
Dec 06 06:50:02 crc kubenswrapper[4823]: I1206 06:50:02.966611 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.155456 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2d0c7c-d378-4a38-956d-56a576de5c21" path="/var/lib/kubelet/pods/de2d0c7c-d378-4a38-956d-56a576de5c21/volumes"
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.271220 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.271803 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-central-agent" containerID="cri-o://78f023ea41c0203f1edc73c59c63a12b5ad719ce0c49b7f8dfcadb9f7d82bc23" gracePeriod=30
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.272431 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="proxy-httpd" containerID="cri-o://9ef8800e027d1803b9587c5b9e9e16dd2ca3ebdb5568c45d946d1b887723892a" gracePeriod=30
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.272498 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="sg-core" containerID="cri-o://e931905d8f8211e215f09f286239ac6baf8f48ca0741160a528ae6a669d737e3" gracePeriod=30
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.272545 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-notification-agent" containerID="cri-o://8af0c28e888fa4d5793ae5d4e506f85d3334a9e47e2f248128efd535ab7bace7" gracePeriod=30
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.542954 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 06:50:03 crc kubenswrapper[4823]: W1206 06:50:03.542998 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62548e33_ebf2_47ed_b520_84fb85791699.slice/crio-334c2d996b60b61ad0200554c3caf72429b57e38596c4c22b6b556cdd0f45690 WatchSource:0}: Error finding container 334c2d996b60b61ad0200554c3caf72429b57e38596c4c22b6b556cdd0f45690: Status 404 returned error can't find the container with id 334c2d996b60b61ad0200554c3caf72429b57e38596c4c22b6b556cdd0f45690
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.585492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"62548e33-ebf2-47ed-b520-84fb85791699","Type":"ContainerStarted","Data":"334c2d996b60b61ad0200554c3caf72429b57e38596c4c22b6b556cdd0f45690"}
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.589518 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerID="9ef8800e027d1803b9587c5b9e9e16dd2ca3ebdb5568c45d946d1b887723892a" exitCode=0
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.589561 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerID="e931905d8f8211e215f09f286239ac6baf8f48ca0741160a528ae6a669d737e3" exitCode=2
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.589607 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerDied","Data":"9ef8800e027d1803b9587c5b9e9e16dd2ca3ebdb5568c45d946d1b887723892a"}
Dec 06 06:50:03 crc kubenswrapper[4823]: I1206 06:50:03.589708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerDied","Data":"e931905d8f8211e215f09f286239ac6baf8f48ca0741160a528ae6a669d737e3"}
Dec 06 06:50:04 crc kubenswrapper[4823]: I1206 06:50:04.604612 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerID="78f023ea41c0203f1edc73c59c63a12b5ad719ce0c49b7f8dfcadb9f7d82bc23" exitCode=0
Dec 06 06:50:04 crc kubenswrapper[4823]: I1206 06:50:04.604706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerDied","Data":"78f023ea41c0203f1edc73c59c63a12b5ad719ce0c49b7f8dfcadb9f7d82bc23"}
Dec 06 06:50:04 crc kubenswrapper[4823]: I1206 06:50:04.608501 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"62548e33-ebf2-47ed-b520-84fb85791699","Type":"ContainerStarted","Data":"b8c397e3a7031a8efc976483e2069a11c40957d597a631e53a378abdbd2722dc"}
Dec 06 06:50:04 crc kubenswrapper[4823]: I1206 06:50:04.608637 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 06 06:50:04 crc kubenswrapper[4823]: I1206 06:50:04.638439 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.23728814 podStartE2EDuration="2.638413051s" podCreationTimestamp="2025-12-06 06:50:02 +0000 UTC" firstStartedPulling="2025-12-06 06:50:03.546256658 +0000 UTC m=+1504.832008618" lastFinishedPulling="2025-12-06 06:50:03.947381569 +0000 UTC m=+1505.233133529" observedRunningTime="2025-12-06 06:50:04.625032023 +0000 UTC m=+1505.910783983" watchObservedRunningTime="2025-12-06 06:50:04.638413051 +0000 UTC m=+1505.924165011"
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.623743 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerID="8af0c28e888fa4d5793ae5d4e506f85d3334a9e47e2f248128efd535ab7bace7" exitCode=0
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.623807 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerDied","Data":"8af0c28e888fa4d5793ae5d4e506f85d3334a9e47e2f248128efd535ab7bace7"}
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.624707 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4cf458-c626-43f1-ac23-1054c38e7645","Type":"ContainerDied","Data":"11cbd661d442c055bb4bc49975f706cbbf66ea421eb5381e0bf48ed75ba44144"}
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.624733 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11cbd661d442c055bb4bc49975f706cbbf66ea421eb5381e0bf48ed75ba44144"
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.703732 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.833703 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-run-httpd\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.833804 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-sg-core-conf-yaml\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.833998 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-log-httpd\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.834072 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-scripts\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.834103 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-config-data\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.834163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-combined-ca-bundle\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.834234 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s77x\" (UniqueName: \"kubernetes.io/projected/8a4cf458-c626-43f1-ac23-1054c38e7645-kube-api-access-6s77x\") pod \"8a4cf458-c626-43f1-ac23-1054c38e7645\" (UID: \"8a4cf458-c626-43f1-ac23-1054c38e7645\") "
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.834654 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.834838 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.841942 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4cf458-c626-43f1-ac23-1054c38e7645-kube-api-access-6s77x" (OuterVolumeSpecName: "kube-api-access-6s77x") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "kube-api-access-6s77x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.854332 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-scripts" (OuterVolumeSpecName: "scripts") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.866867 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.936604 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s77x\" (UniqueName: \"kubernetes.io/projected/8a4cf458-c626-43f1-ac23-1054c38e7645-kube-api-access-6s77x\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.936648 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.936680 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.936695 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4cf458-c626-43f1-ac23-1054c38e7645-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.936705 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.947586 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:50:05 crc kubenswrapper[4823]: I1206 06:50:05.963493 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-config-data" (OuterVolumeSpecName: "config-data") pod "8a4cf458-c626-43f1-ac23-1054c38e7645" (UID: "8a4cf458-c626-43f1-ac23-1054c38e7645"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.039116 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.039153 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4cf458-c626-43f1-ac23-1054c38e7645-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.634835 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.712760 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.729415 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.744740 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 06 06:50:06 crc kubenswrapper[4823]: E1206 06:50:06.745418 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-notification-agent"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.745451 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-notification-agent"
Dec 06 06:50:06 crc kubenswrapper[4823]: E1206 06:50:06.745469 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="proxy-httpd"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.745478 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="proxy-httpd"
Dec 06 06:50:06 crc kubenswrapper[4823]: E1206 06:50:06.745500 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="sg-core"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.745508 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="sg-core"
Dec 06 06:50:06 crc kubenswrapper[4823]: E1206 06:50:06.745530 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-central-agent"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.745540 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-central-agent"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.745964 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-central-agent"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.746008 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="sg-core"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.746029 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="proxy-httpd"
Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.746041 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-notification-agent"
podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" containerName="ceilometer-notification-agent" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.749012 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.751888 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.751914 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.751958 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.760157 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.856861 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.856948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.856978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-run-httpd\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.857054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.857077 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-scripts\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.857098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qdwf\" (UniqueName: \"kubernetes.io/projected/a7b2649e-94bb-49cd-82c4-347a3dc94606-kube-api-access-7qdwf\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.857186 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-config-data\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 
crc kubenswrapper[4823]: I1206 06:50:06.857269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-log-httpd\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.958822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-log-httpd\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.959512 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-log-httpd\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.959546 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.959759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.959828 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-run-httpd\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.960057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.960109 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-scripts\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.960147 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qdwf\" (UniqueName: \"kubernetes.io/projected/a7b2649e-94bb-49cd-82c4-347a3dc94606-kube-api-access-7qdwf\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.960283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-config-data\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc 
kubenswrapper[4823]: I1206 06:50:06.960931 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-run-httpd\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.967491 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-scripts\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.967518 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.967672 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.969884 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-config-data\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.973287 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:06 crc kubenswrapper[4823]: I1206 06:50:06.982134 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qdwf\" (UniqueName: \"kubernetes.io/projected/a7b2649e-94bb-49cd-82c4-347a3dc94606-kube-api-access-7qdwf\") pod \"ceilometer-0\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " pod="openstack/ceilometer-0" Dec 06 06:50:07 crc kubenswrapper[4823]: I1206 06:50:07.073334 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:50:07 crc kubenswrapper[4823]: I1206 06:50:07.176145 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4cf458-c626-43f1-ac23-1054c38e7645" path="/var/lib/kubelet/pods/8a4cf458-c626-43f1-ac23-1054c38e7645/volumes" Dec 06 06:50:07 crc kubenswrapper[4823]: W1206 06:50:07.717464 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b2649e_94bb_49cd_82c4_347a3dc94606.slice/crio-11c5c7e955d71db4835f31857bee538fa60a91a311af3d7819ed903b8fad29d6 WatchSource:0}: Error finding container 11c5c7e955d71db4835f31857bee538fa60a91a311af3d7819ed903b8fad29d6: Status 404 returned error can't find the container with id 11c5c7e955d71db4835f31857bee538fa60a91a311af3d7819ed903b8fad29d6 Dec 06 06:50:07 crc kubenswrapper[4823]: I1206 06:50:07.723320 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.110617 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.116245 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.119080 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.658604 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerStarted","Data":"d2ca7d7231cb090d510b1455f00e58d960594cc2ba758678c27dc1c195dfb97c"} Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.658955 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerStarted","Data":"a6b0e7ca9ece2cb48926fd293a2bc1e5e0ff0a1b7874c8a6f9a7e53d61926856"} Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.658996 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerStarted","Data":"11c5c7e955d71db4835f31857bee538fa60a91a311af3d7819ed903b8fad29d6"} Dec 06 06:50:08 crc kubenswrapper[4823]: I1206 06:50:08.665039 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 06:50:09 crc kubenswrapper[4823]: I1206 06:50:09.670481 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerStarted","Data":"87fe23657bf784b1364b53b67b72861e35afefdbe93a84bd8bd732d799c33e11"} Dec 06 06:50:10 crc kubenswrapper[4823]: I1206 06:50:10.684348 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerStarted","Data":"5a9ba00dc5364d1acbdc5e3a865ebd864ad0d02c64cd2b5b2c18448a8f155a85"} Dec 06 06:50:10 crc kubenswrapper[4823]: I1206 06:50:10.717248 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5419051919999998 podStartE2EDuration="4.717225971s" podCreationTimestamp="2025-12-06 06:50:06 +0000 UTC" firstStartedPulling="2025-12-06 06:50:07.720031819 +0000 UTC m=+1509.005783779" 
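The "m=+1511.994594258" suffixes on the latency-tracker timestamps above are Go's monotonic clock reading, which time.Time.String() prints alongside the wall-clock value (here, seconds since the kubelet process started); durations such as the pull window are computed from that monotonic component. A quick self-contained demonstration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now()
	fmt.Println(t)          // wall clock plus "m=+..." monotonic reading
	fmt.Println(t.Round(0)) // Round(0) strips the monotonic reading

	time.Sleep(10 * time.Millisecond)
	// Durations between times that carry monotonic readings use the
	// monotonic clock, so they are immune to wall-clock adjustments.
	fmt.Println(time.Since(t))
}
```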
Dec 06 06:50:10 crc kubenswrapper[4823]: I1206 06:50:10.887452 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 06 06:50:10 crc kubenswrapper[4823]: I1206 06:50:10.888152 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 06 06:50:10 crc kubenswrapper[4823]: I1206 06:50:10.889638 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 06 06:50:10 crc kubenswrapper[4823]: I1206 06:50:10.895903 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.588551 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.695614 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-combined-ca-bundle\") pod \"228fbc4e-4973-4671-a38c-b74baed39e11\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") "
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.695836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-config-data\") pod \"228fbc4e-4973-4671-a38c-b74baed39e11\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") "
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.695932 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f559q\" (UniqueName: \"kubernetes.io/projected/228fbc4e-4973-4671-a38c-b74baed39e11-kube-api-access-f559q\") pod \"228fbc4e-4973-4671-a38c-b74baed39e11\" (UID: \"228fbc4e-4973-4671-a38c-b74baed39e11\") "
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.698855 4823 generic.go:334] "Generic (PLEG): container finished" podID="228fbc4e-4973-4671-a38c-b74baed39e11" containerID="d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57" exitCode=137
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.699116 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.699174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228fbc4e-4973-4671-a38c-b74baed39e11","Type":"ContainerDied","Data":"d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57"}
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.699213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228fbc4e-4973-4671-a38c-b74baed39e11","Type":"ContainerDied","Data":"ab66030316de03f14baf7132caa043884a11307aa905397b94b5edde66efeb9b"}
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.699236 4823 scope.go:117] "RemoveContainer" containerID="d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.701069 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.701768 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228fbc4e-4973-4671-a38c-b74baed39e11-kube-api-access-f559q" (OuterVolumeSpecName: "kube-api-access-f559q") pod "228fbc4e-4973-4671-a38c-b74baed39e11" (UID: "228fbc4e-4973-4671-a38c-b74baed39e11"). InnerVolumeSpecName "kube-api-access-f559q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.702027 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.726443 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.735636 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-config-data" (OuterVolumeSpecName: "config-data") pod "228fbc4e-4973-4671-a38c-b74baed39e11" (UID: "228fbc4e-4973-4671-a38c-b74baed39e11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.773458 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "228fbc4e-4973-4671-a38c-b74baed39e11" (UID: "228fbc4e-4973-4671-a38c-b74baed39e11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.799035 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.799067 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228fbc4e-4973-4671-a38c-b74baed39e11-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.799080 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f559q\" (UniqueName: \"kubernetes.io/projected/228fbc4e-4973-4671-a38c-b74baed39e11-kube-api-access-f559q\") on node \"crc\" DevicePath \"\""
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.865765 4823 scope.go:117] "RemoveContainer" containerID="d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57"
Dec 06 06:50:11 crc kubenswrapper[4823]: E1206 06:50:11.866544 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57\": container with ID starting with d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57 not found: ID does not exist" containerID="d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.866693 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57"} err="failed to get container status \"d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57\": rpc error: code = NotFound desc = could not find container \"d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57\": container with ID starting with d470c0f7964719016cf5231eed42ff866f0e4612c444800873c75129acdffe57 not found: ID does not exist"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.923949 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-657f5df845-fq9wm"]
Dec 06 06:50:11 crc kubenswrapper[4823]: E1206 06:50:11.924540 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228fbc4e-4973-4671-a38c-b74baed39e11" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.924571 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="228fbc4e-4973-4671-a38c-b74baed39e11" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.924873 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="228fbc4e-4973-4671-a38c-b74baed39e11" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.926000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:11 crc kubenswrapper[4823]: I1206 06:50:11.953765 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657f5df845-fq9wm"]
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.008260 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-svc\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.008329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-config\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.008377 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k5mw\" (UniqueName: \"kubernetes.io/projected/d5160c40-0e83-445f-bf12-b4530306aaaf-kube-api-access-9k5mw\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.008430 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-sb\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.008458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-swift-storage-0\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.008496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-nb\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.055707 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.082507 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.107271 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.108916 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.114480 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-swift-storage-0\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.114545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-nb\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.114624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-svc\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.116487 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-swift-storage-0\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.120572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-config\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.120802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k5mw\" (UniqueName: \"kubernetes.io/projected/d5160c40-0e83-445f-bf12-b4530306aaaf-kube-api-access-9k5mw\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.120973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-sb\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.122217 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-sb\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.123006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-svc\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.123782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-config\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.124469 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.124561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.126027 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-nb\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.137501 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.147319 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.170386 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k5mw\" (UniqueName: \"kubernetes.io/projected/d5160c40-0e83-445f-bf12-b4530306aaaf-kube-api-access-9k5mw\") pod \"dnsmasq-dns-657f5df845-fq9wm\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") " pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.223285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.223348 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsnw\" (UniqueName: \"kubernetes.io/projected/f81822cf-636b-4865-8ceb-e97e6a0f29c3-kube-api-access-nzsnw\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.223375 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.223419 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.223444 4823 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.243758 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.325455 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.325810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzsnw\" (UniqueName: \"kubernetes.io/projected/f81822cf-636b-4865-8ceb-e97e6a0f29c3-kube-api-access-nzsnw\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.325868 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.325929 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.325953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.331644 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.334237 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.337244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.339562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81822cf-636b-4865-8ceb-e97e6a0f29c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.351472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzsnw\" (UniqueName: \"kubernetes.io/projected/f81822cf-636b-4865-8ceb-e97e6a0f29c3-kube-api-access-nzsnw\") pod \"nova-cell1-novncproxy-0\" (UID: \"f81822cf-636b-4865-8ceb-e97e6a0f29c3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.450378 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.858790 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657f5df845-fq9wm"] Dec 06 06:50:12 crc kubenswrapper[4823]: W1206 06:50:12.860889 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5160c40_0e83_445f_bf12_b4530306aaaf.slice/crio-903e5a2bdc829027ba499ae5d81d33228f4edb304e51d6d075154e8aa6ba968c WatchSource:0}: Error finding container 903e5a2bdc829027ba499ae5d81d33228f4edb304e51d6d075154e8aa6ba968c: Status 404 returned error can't find the container with id 903e5a2bdc829027ba499ae5d81d33228f4edb304e51d6d075154e8aa6ba968c Dec 06 06:50:12 crc kubenswrapper[4823]: W1206 06:50:12.932142 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf81822cf_636b_4865_8ceb_e97e6a0f29c3.slice/crio-9cdf7ab9d5cbedab5f845d2b024ff85791480a33efdd23675493675aba456b90 WatchSource:0}: Error finding container 9cdf7ab9d5cbedab5f845d2b024ff85791480a33efdd23675493675aba456b90: Status 404 returned error can't find the container with id 9cdf7ab9d5cbedab5f845d2b024ff85791480a33efdd23675493675aba456b90 Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.943330 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 06:50:12 crc kubenswrapper[4823]: I1206 06:50:12.983992 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.169853 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228fbc4e-4973-4671-a38c-b74baed39e11" path="/var/lib/kubelet/pods/228fbc4e-4973-4671-a38c-b74baed39e11/volumes" Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.732128 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerID="cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6" exitCode=0 Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.732333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" event={"ID":"d5160c40-0e83-445f-bf12-b4530306aaaf","Type":"ContainerDied","Data":"cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6"} Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.732512 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" 
event={"ID":"d5160c40-0e83-445f-bf12-b4530306aaaf","Type":"ContainerStarted","Data":"903e5a2bdc829027ba499ae5d81d33228f4edb304e51d6d075154e8aa6ba968c"} Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.736242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f81822cf-636b-4865-8ceb-e97e6a0f29c3","Type":"ContainerStarted","Data":"0ad4c9d029caa826a7b3700b79a46dc29a4846bd2c3fba8b0033856c9d0a8c4a"} Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.736314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f81822cf-636b-4865-8ceb-e97e6a0f29c3","Type":"ContainerStarted","Data":"9cdf7ab9d5cbedab5f845d2b024ff85791480a33efdd23675493675aba456b90"} Dec 06 06:50:13 crc kubenswrapper[4823]: I1206 06:50:13.796874 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.79681324 podStartE2EDuration="1.79681324s" podCreationTimestamp="2025-12-06 06:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:13.795809871 +0000 UTC m=+1515.081561831" watchObservedRunningTime="2025-12-06 06:50:13.79681324 +0000 UTC m=+1515.082565210" Dec 06 06:50:14 crc kubenswrapper[4823]: I1206 06:50:14.749781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" event={"ID":"d5160c40-0e83-445f-bf12-b4530306aaaf","Type":"ContainerStarted","Data":"80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d"} Dec 06 06:50:14 crc kubenswrapper[4823]: I1206 06:50:14.788450 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" podStartSLOduration=3.78842 podStartE2EDuration="3.78842s" podCreationTimestamp="2025-12-06 06:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:14.774880848 +0000 UTC m=+1516.060632808" watchObservedRunningTime="2025-12-06 06:50:14.78842 +0000 UTC m=+1516.074171960" Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.003048 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.003385 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-log" containerID="cri-o://1937f979cfeb965f1f8341bdb85b27425d1ac9db2c6cc0362815110a07ded1e4" gracePeriod=30 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.003496 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-api" containerID="cri-o://babe1ce126ec96ce515734f650f9fd130ac4c1093bac34c09e6c989b2c48df56" gracePeriod=30 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.614405 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.615143 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="proxy-httpd" containerID="cri-o://5a9ba00dc5364d1acbdc5e3a865ebd864ad0d02c64cd2b5b2c18448a8f155a85" gracePeriod=30 Dec 06 06:50:15 crc 
kubenswrapper[4823]: I1206 06:50:15.615163 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="sg-core" containerID="cri-o://87fe23657bf784b1364b53b67b72861e35afefdbe93a84bd8bd732d799c33e11" gracePeriod=30 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.615069 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-central-agent" containerID="cri-o://a6b0e7ca9ece2cb48926fd293a2bc1e5e0ff0a1b7874c8a6f9a7e53d61926856" gracePeriod=30 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.615265 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-notification-agent" containerID="cri-o://d2ca7d7231cb090d510b1455f00e58d960594cc2ba758678c27dc1c195dfb97c" gracePeriod=30 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.769013 4823 generic.go:334] "Generic (PLEG): container finished" podID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerID="1937f979cfeb965f1f8341bdb85b27425d1ac9db2c6cc0362815110a07ded1e4" exitCode=143 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.769781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a9aa958-25be-4e23-a51d-f08b175ce5c6","Type":"ContainerDied","Data":"1937f979cfeb965f1f8341bdb85b27425d1ac9db2c6cc0362815110a07ded1e4"} Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.793024 4823 generic.go:334] "Generic (PLEG): container finished" podID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerID="87fe23657bf784b1364b53b67b72861e35afefdbe93a84bd8bd732d799c33e11" exitCode=2 Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.794229 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerDied","Data":"87fe23657bf784b1364b53b67b72861e35afefdbe93a84bd8bd732d799c33e11"} Dec 06 06:50:15 crc kubenswrapper[4823]: I1206 06:50:15.794281 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" Dec 06 06:50:16 crc kubenswrapper[4823]: I1206 06:50:16.811502 4823 generic.go:334] "Generic (PLEG): container finished" podID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerID="5a9ba00dc5364d1acbdc5e3a865ebd864ad0d02c64cd2b5b2c18448a8f155a85" exitCode=0 Dec 06 06:50:16 crc kubenswrapper[4823]: I1206 06:50:16.811881 4823 generic.go:334] "Generic (PLEG): container finished" podID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerID="d2ca7d7231cb090d510b1455f00e58d960594cc2ba758678c27dc1c195dfb97c" exitCode=0 Dec 06 06:50:16 crc kubenswrapper[4823]: I1206 06:50:16.811951 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerDied","Data":"5a9ba00dc5364d1acbdc5e3a865ebd864ad0d02c64cd2b5b2c18448a8f155a85"} Dec 06 06:50:16 crc kubenswrapper[4823]: I1206 06:50:16.811987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerDied","Data":"d2ca7d7231cb090d510b1455f00e58d960594cc2ba758678c27dc1c195dfb97c"} Dec 06 06:50:16 crc kubenswrapper[4823]: I1206 06:50:16.819324 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerID="babe1ce126ec96ce515734f650f9fd130ac4c1093bac34c09e6c989b2c48df56" exitCode=0 Dec 06 06:50:16 crc kubenswrapper[4823]: I1206 06:50:16.820585 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a9aa958-25be-4e23-a51d-f08b175ce5c6","Type":"ContainerDied","Data":"babe1ce126ec96ce515734f650f9fd130ac4c1093bac34c09e6c989b2c48df56"} Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.319269 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.383928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-combined-ca-bundle\") pod \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.384059 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa958-25be-4e23-a51d-f08b175ce5c6-logs\") pod \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.384524 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a9aa958-25be-4e23-a51d-f08b175ce5c6-logs" (OuterVolumeSpecName: "logs") pod "0a9aa958-25be-4e23-a51d-f08b175ce5c6" (UID: "0a9aa958-25be-4e23-a51d-f08b175ce5c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.384784 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-config-data\") pod \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.385149 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skng5\" (UniqueName: \"kubernetes.io/projected/0a9aa958-25be-4e23-a51d-f08b175ce5c6-kube-api-access-skng5\") pod \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\" (UID: \"0a9aa958-25be-4e23-a51d-f08b175ce5c6\") " Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.386343 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9aa958-25be-4e23-a51d-f08b175ce5c6-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.406254 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a9aa958-25be-4e23-a51d-f08b175ce5c6-kube-api-access-skng5" (OuterVolumeSpecName: "kube-api-access-skng5") pod "0a9aa958-25be-4e23-a51d-f08b175ce5c6" (UID: "0a9aa958-25be-4e23-a51d-f08b175ce5c6"). InnerVolumeSpecName "kube-api-access-skng5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.428782 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a9aa958-25be-4e23-a51d-f08b175ce5c6" (UID: "0a9aa958-25be-4e23-a51d-f08b175ce5c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.451956 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.488625 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.488726 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skng5\" (UniqueName: \"kubernetes.io/projected/0a9aa958-25be-4e23-a51d-f08b175ce5c6-kube-api-access-skng5\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.488836 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-config-data" (OuterVolumeSpecName: "config-data") pod "0a9aa958-25be-4e23-a51d-f08b175ce5c6" (UID: "0a9aa958-25be-4e23-a51d-f08b175ce5c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.591052 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9aa958-25be-4e23-a51d-f08b175ce5c6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.834785 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a9aa958-25be-4e23-a51d-f08b175ce5c6","Type":"ContainerDied","Data":"3c8e67606534e4df3648203fc70bc0dbc40aa3f9980f047174b53fe9e7c19e40"} Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.834872 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.835102 4823 scope.go:117] "RemoveContainer" containerID="babe1ce126ec96ce515734f650f9fd130ac4c1093bac34c09e6c989b2c48df56" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.860865 4823 scope.go:117] "RemoveContainer" containerID="1937f979cfeb965f1f8341bdb85b27425d1ac9db2c6cc0362815110a07ded1e4" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.876617 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.889462 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.920848 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:17 crc kubenswrapper[4823]: E1206 06:50:17.921470 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-api" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.921492 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-api" Dec 06 06:50:17 crc kubenswrapper[4823]: E1206 06:50:17.921522 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-log" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.921531 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-log" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.921756 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-log" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.921791 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" containerName="nova-api-api" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.923125 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.928488 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.928557 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.928690 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 06 06:50:17 crc kubenswrapper[4823]: I1206 06:50:17.940626 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.009545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdf57\" (UniqueName: \"kubernetes.io/projected/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-kube-api-access-pdf57\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.009607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-config-data\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.009643 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.009743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.009927 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-logs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.010091 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.110852 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-logs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.110926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.110992 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdf57\" (UniqueName: \"kubernetes.io/projected/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-kube-api-access-pdf57\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.111040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-config-data\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.111251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-logs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.111418 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.111812 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.117335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.117415 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-config-data\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.118246 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.118312 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.141401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdf57\" (UniqueName: \"kubernetes.io/projected/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-kube-api-access-pdf57\") pod \"nova-api-0\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " 
pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.251271 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.794090 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:18 crc kubenswrapper[4823]: W1206 06:50:18.799498 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d4bf3b7_41e0_4faa_bf7b_4e42f6ee2585.slice/crio-b027379f7e58a989633fd418d9368ef5fead0940d850e8ae622f66017910bf27 WatchSource:0}: Error finding container b027379f7e58a989633fd418d9368ef5fead0940d850e8ae622f66017910bf27: Status 404 returned error can't find the container with id b027379f7e58a989633fd418d9368ef5fead0940d850e8ae622f66017910bf27 Dec 06 06:50:18 crc kubenswrapper[4823]: I1206 06:50:18.857288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585","Type":"ContainerStarted","Data":"b027379f7e58a989633fd418d9368ef5fead0940d850e8ae622f66017910bf27"} Dec 06 06:50:19 crc kubenswrapper[4823]: I1206 06:50:19.172570 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a9aa958-25be-4e23-a51d-f08b175ce5c6" path="/var/lib/kubelet/pods/0a9aa958-25be-4e23-a51d-f08b175ce5c6/volumes" Dec 06 06:50:19 crc kubenswrapper[4823]: I1206 06:50:19.895408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerDied","Data":"a6b0e7ca9ece2cb48926fd293a2bc1e5e0ff0a1b7874c8a6f9a7e53d61926856"} Dec 06 06:50:19 crc kubenswrapper[4823]: I1206 06:50:19.895403 4823 generic.go:334] "Generic (PLEG): container finished" podID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerID="a6b0e7ca9ece2cb48926fd293a2bc1e5e0ff0a1b7874c8a6f9a7e53d61926856" exitCode=0 Dec 06 06:50:19 crc kubenswrapper[4823]: I1206 06:50:19.899639 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585","Type":"ContainerStarted","Data":"48b1ee255e4e9cd71f273733ac3dd64a4cca3bd1b9771a5d3e045982c1355ec5"} Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.106632 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.161908 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-log-httpd\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.161946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-run-httpd\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162055 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qdwf\" (UniqueName: \"kubernetes.io/projected/a7b2649e-94bb-49cd-82c4-347a3dc94606-kube-api-access-7qdwf\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162088 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-combined-ca-bundle\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-ceilometer-tls-certs\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162231 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-sg-core-conf-yaml\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162327 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-config-data\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162361 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-scripts\") pod \"a7b2649e-94bb-49cd-82c4-347a3dc94606\" (UID: \"a7b2649e-94bb-49cd-82c4-347a3dc94606\") " Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162410 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.162910 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.163769 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.169968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-scripts" (OuterVolumeSpecName: "scripts") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.172863 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b2649e-94bb-49cd-82c4-347a3dc94606-kube-api-access-7qdwf" (OuterVolumeSpecName: "kube-api-access-7qdwf") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "kube-api-access-7qdwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.249247 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.265424 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.265468 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7b2649e-94bb-49cd-82c4-347a3dc94606-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.265483 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qdwf\" (UniqueName: \"kubernetes.io/projected/a7b2649e-94bb-49cd-82c4-347a3dc94606-kube-api-access-7qdwf\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.265547 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.296896 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.322258 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.367603 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.367699 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.375763 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-config-data" (OuterVolumeSpecName: "config-data") pod "a7b2649e-94bb-49cd-82c4-347a3dc94606" (UID: "a7b2649e-94bb-49cd-82c4-347a3dc94606"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.471119 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7b2649e-94bb-49cd-82c4-347a3dc94606-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.913103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585","Type":"ContainerStarted","Data":"e2025e29ba2c9006daa292d642bcbf382e312f4c8b946c327614d901d2e67fc8"} Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.917820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7b2649e-94bb-49cd-82c4-347a3dc94606","Type":"ContainerDied","Data":"11c5c7e955d71db4835f31857bee538fa60a91a311af3d7819ed903b8fad29d6"} Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.918116 4823 scope.go:117] "RemoveContainer" containerID="5a9ba00dc5364d1acbdc5e3a865ebd864ad0d02c64cd2b5b2c18448a8f155a85" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.917943 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.939937 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.939903226 podStartE2EDuration="3.939903226s" podCreationTimestamp="2025-12-06 06:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:20.933754068 +0000 UTC m=+1522.219506048" watchObservedRunningTime="2025-12-06 06:50:20.939903226 +0000 UTC m=+1522.225655186" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.967427 4823 scope.go:117] "RemoveContainer" containerID="87fe23657bf784b1364b53b67b72861e35afefdbe93a84bd8bd732d799c33e11" Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.971721 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:20 crc kubenswrapper[4823]: I1206 06:50:20.985947 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.002354 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:21 crc kubenswrapper[4823]: E1206 06:50:21.002950 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-notification-agent" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.002978 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-notification-agent" Dec 06 06:50:21 crc kubenswrapper[4823]: E1206 06:50:21.003002 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="proxy-httpd" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003012 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="proxy-httpd" Dec 06 06:50:21 crc kubenswrapper[4823]: E1206 06:50:21.003051 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-central-agent" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003060 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-central-agent" Dec 06 06:50:21 crc kubenswrapper[4823]: E1206 06:50:21.003076 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="sg-core" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003085 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="sg-core" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003354 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-central-agent" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003383 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="sg-core" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003405 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="proxy-httpd" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.003416 4823 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" containerName="ceilometer-notification-agent" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.006071 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.011234 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.011358 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.011506 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.027417 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.038027 4823 scope.go:117] "RemoveContainer" containerID="d2ca7d7231cb090d510b1455f00e58d960594cc2ba758678c27dc1c195dfb97c" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.059579 4823 scope.go:117] "RemoveContainer" containerID="a6b0e7ca9ece2cb48926fd293a2bc1e5e0ff0a1b7874c8a6f9a7e53d61926856" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.083589 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.083653 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.083770 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1d1477-a236-458f-9b57-1d74fc56a92d-run-httpd\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.083826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.083857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-config-data\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.084023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtt6\" (UniqueName: \"kubernetes.io/projected/8e1d1477-a236-458f-9b57-1d74fc56a92d-kube-api-access-7jtt6\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" 
Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.084330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1d1477-a236-458f-9b57-1d74fc56a92d-log-httpd\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.084459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-scripts\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.160480 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7b2649e-94bb-49cd-82c4-347a3dc94606" path="/var/lib/kubelet/pods/a7b2649e-94bb-49cd-82c4-347a3dc94606/volumes" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186364 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1d1477-a236-458f-9b57-1d74fc56a92d-log-httpd\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186432 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-scripts\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1d1477-a236-458f-9b57-1d74fc56a92d-run-httpd\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186636 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186652 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-config-data\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.186689 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7jtt6\" (UniqueName: \"kubernetes.io/projected/8e1d1477-a236-458f-9b57-1d74fc56a92d-kube-api-access-7jtt6\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.187036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1d1477-a236-458f-9b57-1d74fc56a92d-log-httpd\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.187376 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e1d1477-a236-458f-9b57-1d74fc56a92d-run-httpd\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.190766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.191339 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.195238 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-config-data\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.195853 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-scripts\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.195879 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1d1477-a236-458f-9b57-1d74fc56a92d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.205737 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jtt6\" (UniqueName: \"kubernetes.io/projected/8e1d1477-a236-458f-9b57-1d74fc56a92d-kube-api-access-7jtt6\") pod \"ceilometer-0\" (UID: \"8e1d1477-a236-458f-9b57-1d74fc56a92d\") " pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.327517 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.817716 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 06:50:21 crc kubenswrapper[4823]: I1206 06:50:21.934373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1d1477-a236-458f-9b57-1d74fc56a92d","Type":"ContainerStarted","Data":"54851a50bd7748ac7e53e54603f33339296e834c531ce5ac42976822d09c3642"} Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.245767 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.363703 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d46bc7bf9-mxq8f"] Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.364045 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" podUID="d601167c-c425-40f5-ad1d-9cf563033888" containerName="dnsmasq-dns" containerID="cri-o://116de26f1fcdbdb30a27bac3e27e6a2cddc81a132a2e1fdc19500fd12a751e43" gracePeriod=10 Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.457542 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.493491 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.968193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1d1477-a236-458f-9b57-1d74fc56a92d","Type":"ContainerStarted","Data":"16b69c1826936ff11b0cbbe3222745720336fc58ad5b20d74e4196b908fae2a6"} Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.992758 4823 generic.go:334] "Generic (PLEG): container finished" podID="d601167c-c425-40f5-ad1d-9cf563033888" containerID="116de26f1fcdbdb30a27bac3e27e6a2cddc81a132a2e1fdc19500fd12a751e43" exitCode=0 Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.996443 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" event={"ID":"d601167c-c425-40f5-ad1d-9cf563033888","Type":"ContainerDied","Data":"116de26f1fcdbdb30a27bac3e27e6a2cddc81a132a2e1fdc19500fd12a751e43"} Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.996497 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" event={"ID":"d601167c-c425-40f5-ad1d-9cf563033888","Type":"ContainerDied","Data":"d25d253c06a66c08ce32dcdc4b6874a9b5d1e66c76d1acfd13f6c798292dd1ed"} Dec 06 06:50:22 crc kubenswrapper[4823]: I1206 06:50:22.996515 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25d253c06a66c08ce32dcdc4b6874a9b5d1e66c76d1acfd13f6c798292dd1ed" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.028832 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.038799 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.208526 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-config\") pod \"d601167c-c425-40f5-ad1d-9cf563033888\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.208752 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfmkd\" (UniqueName: \"kubernetes.io/projected/d601167c-c425-40f5-ad1d-9cf563033888-kube-api-access-dfmkd\") pod \"d601167c-c425-40f5-ad1d-9cf563033888\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.208886 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-swift-storage-0\") pod \"d601167c-c425-40f5-ad1d-9cf563033888\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.208943 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-svc\") pod \"d601167c-c425-40f5-ad1d-9cf563033888\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.208970 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-sb\") pod \"d601167c-c425-40f5-ad1d-9cf563033888\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.209060 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-nb\") pod \"d601167c-c425-40f5-ad1d-9cf563033888\" (UID: \"d601167c-c425-40f5-ad1d-9cf563033888\") " Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.228953 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d601167c-c425-40f5-ad1d-9cf563033888-kube-api-access-dfmkd" (OuterVolumeSpecName: "kube-api-access-dfmkd") pod "d601167c-c425-40f5-ad1d-9cf563033888" (UID: "d601167c-c425-40f5-ad1d-9cf563033888"). InnerVolumeSpecName "kube-api-access-dfmkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.312505 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfmkd\" (UniqueName: \"kubernetes.io/projected/d601167c-c425-40f5-ad1d-9cf563033888-kube-api-access-dfmkd\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.386284 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d601167c-c425-40f5-ad1d-9cf563033888" (UID: "d601167c-c425-40f5-ad1d-9cf563033888"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.395766 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d601167c-c425-40f5-ad1d-9cf563033888" (UID: "d601167c-c425-40f5-ad1d-9cf563033888"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.416294 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d601167c-c425-40f5-ad1d-9cf563033888" (UID: "d601167c-c425-40f5-ad1d-9cf563033888"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.416439 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d601167c-c425-40f5-ad1d-9cf563033888" (UID: "d601167c-c425-40f5-ad1d-9cf563033888"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.421455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-config" (OuterVolumeSpecName: "config") pod "d601167c-c425-40f5-ad1d-9cf563033888" (UID: "d601167c-c425-40f5-ad1d-9cf563033888"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.423190 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.423224 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.423240 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.423260 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.423276 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d601167c-c425-40f5-ad1d-9cf563033888-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.425733 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4qhhv"] Dec 06 06:50:23 crc kubenswrapper[4823]: E1206 06:50:23.426348 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d601167c-c425-40f5-ad1d-9cf563033888" containerName="dnsmasq-dns" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 
06:50:23.426389 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d601167c-c425-40f5-ad1d-9cf563033888" containerName="dnsmasq-dns" Dec 06 06:50:23 crc kubenswrapper[4823]: E1206 06:50:23.426411 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d601167c-c425-40f5-ad1d-9cf563033888" containerName="init" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.426418 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d601167c-c425-40f5-ad1d-9cf563033888" containerName="init" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.426678 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d601167c-c425-40f5-ad1d-9cf563033888" containerName="dnsmasq-dns" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.427729 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.431139 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.431940 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.445138 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4qhhv"] Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.525244 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-config-data\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.525314 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.525403 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-scripts\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.525530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctk2\" (UniqueName: \"kubernetes.io/projected/4d03281d-e24f-4882-9428-1e3e30ca70ae-kube-api-access-qctk2\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.627168 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-config-data\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.627552 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.627678 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-scripts\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.627853 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctk2\" (UniqueName: \"kubernetes.io/projected/4d03281d-e24f-4882-9428-1e3e30ca70ae-kube-api-access-qctk2\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.632348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-scripts\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.632939 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.632987 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-config-data\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.653776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctk2\" (UniqueName: \"kubernetes.io/projected/4d03281d-e24f-4882-9428-1e3e30ca70ae-kube-api-access-qctk2\") pod \"nova-cell1-cell-mapping-4qhhv\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:23 crc kubenswrapper[4823]: I1206 06:50:23.752400 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:24 crc kubenswrapper[4823]: I1206 06:50:24.021363 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1d1477-a236-458f-9b57-1d74fc56a92d","Type":"ContainerStarted","Data":"3f26327f99df271a1531cc4ba4c25f6699bca070964ad021aed9f0e2d5e71890"} Dec 06 06:50:24 crc kubenswrapper[4823]: I1206 06:50:24.021922 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d46bc7bf9-mxq8f" Dec 06 06:50:24 crc kubenswrapper[4823]: I1206 06:50:24.089085 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d46bc7bf9-mxq8f"] Dec 06 06:50:24 crc kubenswrapper[4823]: I1206 06:50:24.107192 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d46bc7bf9-mxq8f"] Dec 06 06:50:24 crc kubenswrapper[4823]: W1206 06:50:24.317684 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d03281d_e24f_4882_9428_1e3e30ca70ae.slice/crio-03fcc7bc411ff9d0010698877a7b20e29577601bdd994a1110e7460e0a2e37b4 WatchSource:0}: Error finding container 03fcc7bc411ff9d0010698877a7b20e29577601bdd994a1110e7460e0a2e37b4: Status 404 returned error can't find the container with id 03fcc7bc411ff9d0010698877a7b20e29577601bdd994a1110e7460e0a2e37b4 Dec 06 06:50:24 crc kubenswrapper[4823]: I1206 06:50:24.324534 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4qhhv"] Dec 06 06:50:25 crc kubenswrapper[4823]: I1206 06:50:25.034504 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4qhhv" event={"ID":"4d03281d-e24f-4882-9428-1e3e30ca70ae","Type":"ContainerStarted","Data":"116d2b7444ef3d4081fce406e2824eced8461166365179783c15162e5b1c4fca"} Dec 06 06:50:25 crc kubenswrapper[4823]: I1206 06:50:25.035115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4qhhv" event={"ID":"4d03281d-e24f-4882-9428-1e3e30ca70ae","Type":"ContainerStarted","Data":"03fcc7bc411ff9d0010698877a7b20e29577601bdd994a1110e7460e0a2e37b4"} Dec 06 06:50:25 crc kubenswrapper[4823]: I1206 06:50:25.039484 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1d1477-a236-458f-9b57-1d74fc56a92d","Type":"ContainerStarted","Data":"e0ee13aebd4c64cb88b7329d0ef067ed5551a1be4f7430d63357f17f67eaec3d"} Dec 06 06:50:25 crc kubenswrapper[4823]: I1206 06:50:25.056030 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4qhhv" podStartSLOduration=2.056004767 podStartE2EDuration="2.056004767s" podCreationTimestamp="2025-12-06 06:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:25.053828544 +0000 UTC m=+1526.339580504" watchObservedRunningTime="2025-12-06 06:50:25.056004767 +0000 UTC m=+1526.341756727" Dec 06 06:50:25 crc kubenswrapper[4823]: I1206 06:50:25.155331 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d601167c-c425-40f5-ad1d-9cf563033888" path="/var/lib/kubelet/pods/d601167c-c425-40f5-ad1d-9cf563033888/volumes" Dec 06 06:50:27 crc kubenswrapper[4823]: I1206 06:50:27.067989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e1d1477-a236-458f-9b57-1d74fc56a92d","Type":"ContainerStarted","Data":"b12110a9f6968a0677e9ba23ba60af1396bf65f9734dc48af704bbbf4155bb61"} Dec 06 06:50:27 crc kubenswrapper[4823]: I1206 06:50:27.068606 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 06:50:27 crc kubenswrapper[4823]: I1206 06:50:27.096641 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.974392744 podStartE2EDuration="7.096611462s" 
podCreationTimestamp="2025-12-06 06:50:20 +0000 UTC" firstStartedPulling="2025-12-06 06:50:21.805490266 +0000 UTC m=+1523.091242226" lastFinishedPulling="2025-12-06 06:50:25.927708974 +0000 UTC m=+1527.213460944" observedRunningTime="2025-12-06 06:50:27.087592641 +0000 UTC m=+1528.373344601" watchObservedRunningTime="2025-12-06 06:50:27.096611462 +0000 UTC m=+1528.382363422" Dec 06 06:50:28 crc kubenswrapper[4823]: I1206 06:50:28.253246 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 06:50:28 crc kubenswrapper[4823]: I1206 06:50:28.253773 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 06:50:29 crc kubenswrapper[4823]: I1206 06:50:29.271856 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:50:29 crc kubenswrapper[4823]: I1206 06:50:29.271855 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:50:32 crc kubenswrapper[4823]: I1206 06:50:32.151303 4823 generic.go:334] "Generic (PLEG): container finished" podID="4d03281d-e24f-4882-9428-1e3e30ca70ae" containerID="116d2b7444ef3d4081fce406e2824eced8461166365179783c15162e5b1c4fca" exitCode=0 Dec 06 06:50:32 crc kubenswrapper[4823]: I1206 06:50:32.151821 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4qhhv" event={"ID":"4d03281d-e24f-4882-9428-1e3e30ca70ae","Type":"ContainerDied","Data":"116d2b7444ef3d4081fce406e2824eced8461166365179783c15162e5b1c4fca"} Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.595944 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.716215 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-config-data\") pod \"4d03281d-e24f-4882-9428-1e3e30ca70ae\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.716310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-combined-ca-bundle\") pod \"4d03281d-e24f-4882-9428-1e3e30ca70ae\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.716535 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qctk2\" (UniqueName: \"kubernetes.io/projected/4d03281d-e24f-4882-9428-1e3e30ca70ae-kube-api-access-qctk2\") pod \"4d03281d-e24f-4882-9428-1e3e30ca70ae\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.716632 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-scripts\") pod \"4d03281d-e24f-4882-9428-1e3e30ca70ae\" (UID: \"4d03281d-e24f-4882-9428-1e3e30ca70ae\") " Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.723525 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-scripts" (OuterVolumeSpecName: "scripts") pod "4d03281d-e24f-4882-9428-1e3e30ca70ae" (UID: "4d03281d-e24f-4882-9428-1e3e30ca70ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.725408 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d03281d-e24f-4882-9428-1e3e30ca70ae-kube-api-access-qctk2" (OuterVolumeSpecName: "kube-api-access-qctk2") pod "4d03281d-e24f-4882-9428-1e3e30ca70ae" (UID: "4d03281d-e24f-4882-9428-1e3e30ca70ae"). InnerVolumeSpecName "kube-api-access-qctk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.748530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-config-data" (OuterVolumeSpecName: "config-data") pod "4d03281d-e24f-4882-9428-1e3e30ca70ae" (UID: "4d03281d-e24f-4882-9428-1e3e30ca70ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.753455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d03281d-e24f-4882-9428-1e3e30ca70ae" (UID: "4d03281d-e24f-4882-9428-1e3e30ca70ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.820682 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.820727 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.820740 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d03281d-e24f-4882-9428-1e3e30ca70ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:33 crc kubenswrapper[4823]: I1206 06:50:33.820755 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qctk2\" (UniqueName: \"kubernetes.io/projected/4d03281d-e24f-4882-9428-1e3e30ca70ae-kube-api-access-qctk2\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.173912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4qhhv" event={"ID":"4d03281d-e24f-4882-9428-1e3e30ca70ae","Type":"ContainerDied","Data":"03fcc7bc411ff9d0010698877a7b20e29577601bdd994a1110e7460e0a2e37b4"} Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.174188 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03fcc7bc411ff9d0010698877a7b20e29577601bdd994a1110e7460e0a2e37b4" Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.173975 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4qhhv" Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.480128 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.480502 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-log" containerID="cri-o://48b1ee255e4e9cd71f273733ac3dd64a4cca3bd1b9771a5d3e045982c1355ec5" gracePeriod=30 Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.480704 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-api" containerID="cri-o://e2025e29ba2c9006daa292d642bcbf382e312f4c8b946c327614d901d2e67fc8" gracePeriod=30 Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.499957 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.500519 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ac353c50-086b-4a10-9976-71287895e09f" containerName="nova-scheduler-scheduler" containerID="cri-o://8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" gracePeriod=30 Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.513544 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.513955 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" 
containerName="nova-metadata-log" containerID="cri-o://938a2223669fddc1cb8aaf78ff68285035cff8ad5e544466f4c898c0d8c743ca" gracePeriod=30 Dec 06 06:50:34 crc kubenswrapper[4823]: I1206 06:50:34.515016 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-metadata" containerID="cri-o://b50c56e76a54547cca34dd072b59bd5371095ca471fc4c00ec95a1c9f8d4f725" gracePeriod=30 Dec 06 06:50:35 crc kubenswrapper[4823]: I1206 06:50:35.210262 4823 generic.go:334] "Generic (PLEG): container finished" podID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerID="938a2223669fddc1cb8aaf78ff68285035cff8ad5e544466f4c898c0d8c743ca" exitCode=143 Dec 06 06:50:35 crc kubenswrapper[4823]: I1206 06:50:35.210427 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bcdcc5-1bc7-456d-9574-d6fe00683166","Type":"ContainerDied","Data":"938a2223669fddc1cb8aaf78ff68285035cff8ad5e544466f4c898c0d8c743ca"} Dec 06 06:50:35 crc kubenswrapper[4823]: I1206 06:50:35.231156 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerID="48b1ee255e4e9cd71f273733ac3dd64a4cca3bd1b9771a5d3e045982c1355ec5" exitCode=143 Dec 06 06:50:35 crc kubenswrapper[4823]: I1206 06:50:35.231549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585","Type":"ContainerDied","Data":"48b1ee255e4e9cd71f273733ac3dd64a4cca3bd1b9771a5d3e045982c1355ec5"} Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.051649 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.051747 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.258816 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerID="e2025e29ba2c9006daa292d642bcbf382e312f4c8b946c327614d901d2e67fc8" exitCode=0 Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.258901 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585","Type":"ContainerDied","Data":"e2025e29ba2c9006daa292d642bcbf382e312f4c8b946c327614d901d2e67fc8"} Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.261047 4823 generic.go:334] "Generic (PLEG): container finished" podID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerID="b50c56e76a54547cca34dd072b59bd5371095ca471fc4c00ec95a1c9f8d4f725" exitCode=0 Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.261123 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bcdcc5-1bc7-456d-9574-d6fe00683166","Type":"ContainerDied","Data":"b50c56e76a54547cca34dd072b59bd5371095ca471fc4c00ec95a1c9f8d4f725"} Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.688545 4823 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.691909 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-public-tls-certs\") pod \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.692086 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdf57\" (UniqueName: \"kubernetes.io/projected/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-kube-api-access-pdf57\") pod \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.692138 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-config-data\") pod \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.692257 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-internal-tls-certs\") pod \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.692311 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-logs\") pod \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.692423 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-combined-ca-bundle\") pod \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\" (UID: \"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585\") " Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.694103 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-logs" (OuterVolumeSpecName: "logs") pod "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" (UID: "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.702040 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-kube-api-access-pdf57" (OuterVolumeSpecName: "kube-api-access-pdf57") pod "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" (UID: "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585"). InnerVolumeSpecName "kube-api-access-pdf57". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.785065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-config-data" (OuterVolumeSpecName: "config-data") pod "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" (UID: "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.789479 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" (UID: "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.803386 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.804030 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.804108 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdf57\" (UniqueName: \"kubernetes.io/projected/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-kube-api-access-pdf57\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.804196 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.820630 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" (UID: "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.848937 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" (UID: "3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.878160 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.912569 4823 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:36 crc kubenswrapper[4823]: I1206 06:50:36.912614 4823 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.014369 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-combined-ca-bundle\") pod \"17bcdcc5-1bc7-456d-9574-d6fe00683166\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.014687 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrfs\" (UniqueName: \"kubernetes.io/projected/17bcdcc5-1bc7-456d-9574-d6fe00683166-kube-api-access-6vrfs\") pod \"17bcdcc5-1bc7-456d-9574-d6fe00683166\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.014779 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-nova-metadata-tls-certs\") pod \"17bcdcc5-1bc7-456d-9574-d6fe00683166\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.014903 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bcdcc5-1bc7-456d-9574-d6fe00683166-logs\") pod \"17bcdcc5-1bc7-456d-9574-d6fe00683166\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.014971 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-config-data\") pod \"17bcdcc5-1bc7-456d-9574-d6fe00683166\" (UID: \"17bcdcc5-1bc7-456d-9574-d6fe00683166\") " Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.015370 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17bcdcc5-1bc7-456d-9574-d6fe00683166-logs" (OuterVolumeSpecName: "logs") pod "17bcdcc5-1bc7-456d-9574-d6fe00683166" (UID: "17bcdcc5-1bc7-456d-9574-d6fe00683166"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.020566 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bcdcc5-1bc7-456d-9574-d6fe00683166-kube-api-access-6vrfs" (OuterVolumeSpecName: "kube-api-access-6vrfs") pod "17bcdcc5-1bc7-456d-9574-d6fe00683166" (UID: "17bcdcc5-1bc7-456d-9574-d6fe00683166"). InnerVolumeSpecName "kube-api-access-6vrfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.055950 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17bcdcc5-1bc7-456d-9574-d6fe00683166" (UID: "17bcdcc5-1bc7-456d-9574-d6fe00683166"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.066561 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-config-data" (OuterVolumeSpecName: "config-data") pod "17bcdcc5-1bc7-456d-9574-d6fe00683166" (UID: "17bcdcc5-1bc7-456d-9574-d6fe00683166"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.113134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "17bcdcc5-1bc7-456d-9574-d6fe00683166" (UID: "17bcdcc5-1bc7-456d-9574-d6fe00683166"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.118167 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.118213 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrfs\" (UniqueName: \"kubernetes.io/projected/17bcdcc5-1bc7-456d-9574-d6fe00683166-kube-api-access-6vrfs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.118226 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.118235 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bcdcc5-1bc7-456d-9574-d6fe00683166-logs\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.118245 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bcdcc5-1bc7-456d-9574-d6fe00683166-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.277125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bcdcc5-1bc7-456d-9574-d6fe00683166","Type":"ContainerDied","Data":"03007b8c113f1f915940f4ad4d4c4b634facad504d198412ba40901cb7892578"} Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.277198 4823 scope.go:117] "RemoveContainer" containerID="b50c56e76a54547cca34dd072b59bd5371095ca471fc4c00ec95a1c9f8d4f725" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.277406 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.282849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585","Type":"ContainerDied","Data":"b027379f7e58a989633fd418d9368ef5fead0940d850e8ae622f66017910bf27"} Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.283070 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.346087 4823 scope.go:117] "RemoveContainer" containerID="938a2223669fddc1cb8aaf78ff68285035cff8ad5e544466f4c898c0d8c743ca" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.364718 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.389820 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.396069 4823 scope.go:117] "RemoveContainer" containerID="e2025e29ba2c9006daa292d642bcbf382e312f4c8b946c327614d901d2e67fc8" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.407762 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.414672 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.432122 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: E1206 06:50:37.433013 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-log" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433103 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-log" Dec 06 06:50:37 crc kubenswrapper[4823]: E1206 06:50:37.433131 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-metadata" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433140 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-metadata" Dec 06 06:50:37 crc kubenswrapper[4823]: E1206 06:50:37.433172 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d03281d-e24f-4882-9428-1e3e30ca70ae" containerName="nova-manage" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433180 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d03281d-e24f-4882-9428-1e3e30ca70ae" containerName="nova-manage" Dec 06 06:50:37 crc kubenswrapper[4823]: E1206 06:50:37.433202 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-log" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433210 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-log" Dec 06 06:50:37 crc kubenswrapper[4823]: E1206 06:50:37.433238 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-api" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433246 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-api" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433498 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-api" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433518 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-metadata" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433529 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" containerName="nova-metadata-log" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433539 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d03281d-e24f-4882-9428-1e3e30ca70ae" containerName="nova-manage" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.433575 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" containerName="nova-api-log" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.435499 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.442311 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.442554 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.444749 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.456132 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.458459 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.476580 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.477106 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.477440 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.481762 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.518496 4823 scope.go:117] "RemoveContainer" containerID="48b1ee255e4e9cd71f273733ac3dd64a4cca3bd1b9771a5d3e045982c1355ec5" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631307 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631396 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631511 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631537 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmkns\" (UniqueName: \"kubernetes.io/projected/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-kube-api-access-wmkns\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5mks\" (UniqueName: \"kubernetes.io/projected/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-kube-api-access-n5mks\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631709 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-logs\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631749 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-config-data\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 
06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631875 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-config-data\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631903 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-logs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.631929 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-public-tls-certs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733487 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-config-data\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733575 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-config-data\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-logs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733641 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-public-tls-certs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733689 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") 
" pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733734 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733823 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733850 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmkns\" (UniqueName: \"kubernetes.io/projected/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-kube-api-access-wmkns\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733892 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5mks\" (UniqueName: \"kubernetes.io/projected/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-kube-api-access-n5mks\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.733940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-logs\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.734770 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-logs\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.735347 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-logs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.740935 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-config-data\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.741711 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.742050 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-config-data\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 
06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.742387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.747698 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-public-tls-certs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.748082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.754094 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmkns\" (UniqueName: \"kubernetes.io/projected/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-kube-api-access-wmkns\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.754356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb\") " pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.756015 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5mks\" (UniqueName: \"kubernetes.io/projected/8cc04758-e28e-4ed1-8abb-e2cc94b0662c-kube-api-access-n5mks\") pod \"nova-api-0\" (UID: \"8cc04758-e28e-4ed1-8abb-e2cc94b0662c\") " pod="openstack/nova-api-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.783818 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 06:50:37 crc kubenswrapper[4823]: I1206 06:50:37.826068 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 06:50:38 crc kubenswrapper[4823]: I1206 06:50:38.319011 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 06:50:38 crc kubenswrapper[4823]: I1206 06:50:38.428166 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 06:50:38 crc kubenswrapper[4823]: E1206 06:50:38.760557 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 06:50:38 crc kubenswrapper[4823]: E1206 06:50:38.766111 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 06:50:38 crc kubenswrapper[4823]: E1206 06:50:38.768543 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 06:50:38 crc kubenswrapper[4823]: E1206 06:50:38.768608 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ac353c50-086b-4a10-9976-71287895e09f" containerName="nova-scheduler-scheduler" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.158072 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bcdcc5-1bc7-456d-9574-d6fe00683166" path="/var/lib/kubelet/pods/17bcdcc5-1bc7-456d-9574-d6fe00683166/volumes" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.160153 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585" path="/var/lib/kubelet/pods/3d4bf3b7-41e0-4faa-bf7b-4e42f6ee2585/volumes" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.279386 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.333975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb","Type":"ContainerStarted","Data":"8bd1f40eb46bb024a3c9802041da18b3c512866d24f6a7d138a3cae765aa0647"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.334036 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb","Type":"ContainerStarted","Data":"1594896362fb13edf4caef741f70179626510e65cdf0cba25c138ef5417fc95e"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.334050 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb","Type":"ContainerStarted","Data":"7e63365991cbe77bc07c6fda33d86264f17ac1defd4dcf5d7bc9136f308ca371"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.346309 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8cc04758-e28e-4ed1-8abb-e2cc94b0662c","Type":"ContainerStarted","Data":"dda18e8667d779eb4cf6e5d7b9223c63c7698c8ca93d2a1fae60574d5a446655"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.346371 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8cc04758-e28e-4ed1-8abb-e2cc94b0662c","Type":"ContainerStarted","Data":"2a3649237699971e58ba3b76305b1a2a075c8c150916a1cfde6181e87e3d17a7"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.346384 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8cc04758-e28e-4ed1-8abb-e2cc94b0662c","Type":"ContainerStarted","Data":"c62e68f52e3a45a36f681ce7a2615453bd98f1e25aa3de5672735b886ca22947"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.351735 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac353c50-086b-4a10-9976-71287895e09f" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" exitCode=0 Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.351793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac353c50-086b-4a10-9976-71287895e09f","Type":"ContainerDied","Data":"8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.351831 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac353c50-086b-4a10-9976-71287895e09f","Type":"ContainerDied","Data":"573f276cc9e4bf7aad5aef0a9576854ec8b725a7be1e6520f12690546d6ef036"} Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.351858 4823 scope.go:117] "RemoveContainer" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.352566 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.372638 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-combined-ca-bundle\") pod \"ac353c50-086b-4a10-9976-71287895e09f\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.372782 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9kl\" (UniqueName: \"kubernetes.io/projected/ac353c50-086b-4a10-9976-71287895e09f-kube-api-access-zt9kl\") pod \"ac353c50-086b-4a10-9976-71287895e09f\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.373006 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-config-data\") pod \"ac353c50-086b-4a10-9976-71287895e09f\" (UID: \"ac353c50-086b-4a10-9976-71287895e09f\") " Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.389372 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac353c50-086b-4a10-9976-71287895e09f-kube-api-access-zt9kl" (OuterVolumeSpecName: "kube-api-access-zt9kl") pod "ac353c50-086b-4a10-9976-71287895e09f" (UID: "ac353c50-086b-4a10-9976-71287895e09f"). InnerVolumeSpecName "kube-api-access-zt9kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.393407 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.393375223 podStartE2EDuration="2.393375223s" podCreationTimestamp="2025-12-06 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:39.368194543 +0000 UTC m=+1540.653946503" watchObservedRunningTime="2025-12-06 06:50:39.393375223 +0000 UTC m=+1540.679127183" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.413328 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.41329566 podStartE2EDuration="2.41329566s" podCreationTimestamp="2025-12-06 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:39.400749196 +0000 UTC m=+1540.686501176" watchObservedRunningTime="2025-12-06 06:50:39.41329566 +0000 UTC m=+1540.699047620" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.415913 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac353c50-086b-4a10-9976-71287895e09f" (UID: "ac353c50-086b-4a10-9976-71287895e09f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.434015 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-config-data" (OuterVolumeSpecName: "config-data") pod "ac353c50-086b-4a10-9976-71287895e09f" (UID: "ac353c50-086b-4a10-9976-71287895e09f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.478026 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.478189 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9kl\" (UniqueName: \"kubernetes.io/projected/ac353c50-086b-4a10-9976-71287895e09f-kube-api-access-zt9kl\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.478205 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac353c50-086b-4a10-9976-71287895e09f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.547114 4823 scope.go:117] "RemoveContainer" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" Dec 06 06:50:39 crc kubenswrapper[4823]: E1206 06:50:39.547980 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235\": container with ID starting with 8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235 not found: ID does not exist" containerID="8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.548032 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235"} err="failed to get container status \"8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235\": rpc error: code = NotFound desc = could not find container \"8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235\": container with ID starting with 8050b2abf5072b7d532126d7aebbc18b68183dc967d9d10774a621c77ac09235 not found: ID does not exist" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.691596 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.708250 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.719833 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:50:39 crc kubenswrapper[4823]: E1206 06:50:39.721610 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac353c50-086b-4a10-9976-71287895e09f" containerName="nova-scheduler-scheduler" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.721696 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac353c50-086b-4a10-9976-71287895e09f" containerName="nova-scheduler-scheduler" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.721972 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac353c50-086b-4a10-9976-71287895e09f" containerName="nova-scheduler-scheduler" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.723418 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.726278 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.734676 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.886499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20589719-3a87-43d3-bc79-0450142879ab-config-data\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.886785 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cz4j\" (UniqueName: \"kubernetes.io/projected/20589719-3a87-43d3-bc79-0450142879ab-kube-api-access-2cz4j\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.886857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20589719-3a87-43d3-bc79-0450142879ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.989530 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20589719-3a87-43d3-bc79-0450142879ab-config-data\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.989620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cz4j\" (UniqueName: \"kubernetes.io/projected/20589719-3a87-43d3-bc79-0450142879ab-kube-api-access-2cz4j\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.989646 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20589719-3a87-43d3-bc79-0450142879ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.993444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20589719-3a87-43d3-bc79-0450142879ab-config-data\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:39 crc kubenswrapper[4823]: I1206 06:50:39.993974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20589719-3a87-43d3-bc79-0450142879ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:40 crc kubenswrapper[4823]: I1206 06:50:40.009818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cz4j\" (UniqueName: 
\"kubernetes.io/projected/20589719-3a87-43d3-bc79-0450142879ab-kube-api-access-2cz4j\") pod \"nova-scheduler-0\" (UID: \"20589719-3a87-43d3-bc79-0450142879ab\") " pod="openstack/nova-scheduler-0" Dec 06 06:50:40 crc kubenswrapper[4823]: I1206 06:50:40.062294 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 06:50:40 crc kubenswrapper[4823]: I1206 06:50:40.182476 4823 scope.go:117] "RemoveContainer" containerID="d9932ec16bd69a77bb4af4b0c40d2c82a078aa359ca3fa24036c6414537d1360" Dec 06 06:50:40 crc kubenswrapper[4823]: I1206 06:50:40.612341 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 06:50:41 crc kubenswrapper[4823]: I1206 06:50:41.177822 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac353c50-086b-4a10-9976-71287895e09f" path="/var/lib/kubelet/pods/ac353c50-086b-4a10-9976-71287895e09f/volumes" Dec 06 06:50:41 crc kubenswrapper[4823]: I1206 06:50:41.389864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20589719-3a87-43d3-bc79-0450142879ab","Type":"ContainerStarted","Data":"5c323d742a2669b5bb80b2e29f2ec25f428d82cac9ca77009950b772dc4f5505"} Dec 06 06:50:41 crc kubenswrapper[4823]: I1206 06:50:41.390180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20589719-3a87-43d3-bc79-0450142879ab","Type":"ContainerStarted","Data":"53ecc0064074d0394b5b2f2e853a46797c310d875a52e4d49c1d2a6463fca448"} Dec 06 06:50:41 crc kubenswrapper[4823]: I1206 06:50:41.417400 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4173748059999998 podStartE2EDuration="2.417374806s" podCreationTimestamp="2025-12-06 06:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:50:41.411279149 +0000 UTC m=+1542.697031109" watchObservedRunningTime="2025-12-06 06:50:41.417374806 +0000 UTC m=+1542.703126766" Dec 06 06:50:42 crc kubenswrapper[4823]: I1206 06:50:42.784817 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 06:50:42 crc kubenswrapper[4823]: I1206 06:50:42.785958 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 06:50:45 crc kubenswrapper[4823]: I1206 06:50:45.063637 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 06:50:47 crc kubenswrapper[4823]: I1206 06:50:47.784947 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 06:50:47 crc kubenswrapper[4823]: I1206 06:50:47.785293 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 06:50:47 crc kubenswrapper[4823]: I1206 06:50:47.826966 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 06:50:47 crc kubenswrapper[4823]: I1206 06:50:47.827253 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 06:50:48 crc kubenswrapper[4823]: I1206 06:50:48.816243 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:50:48 crc kubenswrapper[4823]: I1206 06:50:48.816257 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:50:48 crc kubenswrapper[4823]: I1206 06:50:48.838944 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8cc04758-e28e-4ed1-8abb-e2cc94b0662c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:50:48 crc kubenswrapper[4823]: I1206 06:50:48.839288 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8cc04758-e28e-4ed1-8abb-e2cc94b0662c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:50:50 crc kubenswrapper[4823]: I1206 06:50:50.063691 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 06:50:50 crc kubenswrapper[4823]: I1206 06:50:50.103733 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 06:50:50 crc kubenswrapper[4823]: I1206 06:50:50.529365 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 06:50:51 crc kubenswrapper[4823]: I1206 06:50:51.370842 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.793894 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.797996 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.803092 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.835884 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.836270 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.839624 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 06:50:57 crc kubenswrapper[4823]: I1206 06:50:57.858125 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 06:50:58 crc kubenswrapper[4823]: I1206 06:50:58.609363 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 06:50:58 crc kubenswrapper[4823]: I1206 06:50:58.615553 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 06:50:58 crc kubenswrapper[4823]: I1206 06:50:58.628996 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 
06:51:06 crc kubenswrapper[4823]: I1206 06:51:06.052718 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:51:06 crc kubenswrapper[4823]: I1206 06:51:06.053899 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:51:08 crc kubenswrapper[4823]: I1206 06:51:08.065547 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:51:09 crc kubenswrapper[4823]: I1206 06:51:09.088980 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:51:12 crc kubenswrapper[4823]: I1206 06:51:12.380326 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="rabbitmq" containerID="cri-o://9a13cf91bdfbbbe373c5175e3d6d15d45934ed383cd43480359bc8464e82914c" gracePeriod=604796 Dec 06 06:51:13 crc kubenswrapper[4823]: I1206 06:51:13.039475 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Dec 06 06:51:13 crc kubenswrapper[4823]: I1206 06:51:13.394380 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="rabbitmq" containerID="cri-o://2aa6f450971c1e8a9d97d5edad42e9ba460acafa208bade36d92d0f33d45a43d" gracePeriod=604796 Dec 06 06:51:13 crc kubenswrapper[4823]: I1206 06:51:13.810264 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c7ecce4-d359-486f-9386-057202b69efd" containerID="9a13cf91bdfbbbe373c5175e3d6d15d45934ed383cd43480359bc8464e82914c" exitCode=0 Dec 06 06:51:13 crc kubenswrapper[4823]: I1206 06:51:13.810625 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c7ecce4-d359-486f-9386-057202b69efd","Type":"ContainerDied","Data":"9a13cf91bdfbbbe373c5175e3d6d15d45934ed383cd43480359bc8464e82914c"} Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.026928 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201486 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c7ecce4-d359-486f-9386-057202b69efd-erlang-cookie-secret\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201601 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-config-data\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201640 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4q4v\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-kube-api-access-s4q4v\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201777 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-plugins-conf\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201887 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-erlang-cookie\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201933 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-server-conf\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.201980 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-plugins\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.202064 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-confd\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.202122 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c7ecce4-d359-486f-9386-057202b69efd-pod-info\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.202164 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: 
\"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.202197 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-tls\") pod \"3c7ecce4-d359-486f-9386-057202b69efd\" (UID: \"3c7ecce4-d359-486f-9386-057202b69efd\") " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.203452 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.203640 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.204008 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.209459 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7ecce4-d359-486f-9386-057202b69efd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.209459 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.209846 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3c7ecce4-d359-486f-9386-057202b69efd-pod-info" (OuterVolumeSpecName: "pod-info") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.216885 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.243344 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-kube-api-access-s4q4v" (OuterVolumeSpecName: "kube-api-access-s4q4v") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "kube-api-access-s4q4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.249889 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-config-data" (OuterVolumeSpecName: "config-data") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.300446 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-server-conf" (OuterVolumeSpecName: "server-conf") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.305534 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.305803 4823 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.305900 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.306027 4823 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c7ecce4-d359-486f-9386-057202b69efd-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.306146 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.307526 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.307557 4823 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c7ecce4-d359-486f-9386-057202b69efd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.307584 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc 
kubenswrapper[4823]: I1206 06:51:14.307595 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4q4v\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-kube-api-access-s4q4v\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.307607 4823 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c7ecce4-d359-486f-9386-057202b69efd-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.370996 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.410735 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.440353 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3c7ecce4-d359-486f-9386-057202b69efd" (UID: "3c7ecce4-d359-486f-9386-057202b69efd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.513045 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c7ecce4-d359-486f-9386-057202b69efd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.826769 4823 generic.go:334] "Generic (PLEG): container finished" podID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerID="2aa6f450971c1e8a9d97d5edad42e9ba460acafa208bade36d92d0f33d45a43d" exitCode=0 Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.826861 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807fbfb1-90fe-4325-a0ac-09b309c77172","Type":"ContainerDied","Data":"2aa6f450971c1e8a9d97d5edad42e9ba460acafa208bade36d92d0f33d45a43d"} Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.829809 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c7ecce4-d359-486f-9386-057202b69efd","Type":"ContainerDied","Data":"3d89300eb588a7ae70f6ff08e7c1b1382b862f8f83efaa74acb638169e50888a"} Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.829877 4823 scope.go:117] "RemoveContainer" containerID="9a13cf91bdfbbbe373c5175e3d6d15d45934ed383cd43480359bc8464e82914c" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.829909 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.864992 4823 scope.go:117] "RemoveContainer" containerID="9957dae732164f5c67fc2695ec4e15ce678f7bfdaa4f20525e0b217e88ca4f3e" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.928894 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.958972 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.986916 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:51:14 crc kubenswrapper[4823]: E1206 06:51:14.987633 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="setup-container" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.987685 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="setup-container" Dec 06 06:51:14 crc kubenswrapper[4823]: E1206 06:51:14.987707 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="rabbitmq" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.987717 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="rabbitmq" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.988012 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7ecce4-d359-486f-9386-057202b69efd" containerName="rabbitmq" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.989793 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.993252 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.993522 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.994238 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.994501 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.994889 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.995070 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jp4j4" Dec 06 06:51:14 crc kubenswrapper[4823]: I1206 06:51:14.995190 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.005333 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.134923 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.141402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlhw\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-kube-api-access-rrlhw\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.141716 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.141755 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.141784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.142030 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.142162 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.142195 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.142367 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.142418 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc 
kubenswrapper[4823]: I1206 06:51:15.142506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.142673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.159941 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7ecce4-d359-486f-9386-057202b69efd" path="/var/lib/kubelet/pods/3c7ecce4-d359-486f-9386-057202b69efd/volumes" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-config-data\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-erlang-cookie\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244757 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-confd\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244809 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-plugins\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244868 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807fbfb1-90fe-4325-a0ac-09b309c77172-pod-info\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244894 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-plugins-conf\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.244917 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245029 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-tls\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-server-conf\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245147 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fvrw\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-kube-api-access-4fvrw\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245211 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807fbfb1-90fe-4325-a0ac-09b309c77172-erlang-cookie-secret\") pod \"807fbfb1-90fe-4325-a0ac-09b309c77172\" (UID: \"807fbfb1-90fe-4325-a0ac-09b309c77172\") " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245702 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlhw\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-kube-api-access-rrlhw\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245782 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.245953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc 
kubenswrapper[4823]: I1206 06:51:15.245973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.246062 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.246076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.246107 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.246173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.246241 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.246337 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.248403 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.249349 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.249580 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.258034 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-config-data\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.259314 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.260350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.260450 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/807fbfb1-90fe-4325-a0ac-09b309c77172-pod-info" (OuterVolumeSpecName: "pod-info") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.275226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.275293 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.275470 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-kube-api-access-4fvrw" (OuterVolumeSpecName: "kube-api-access-4fvrw") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "kube-api-access-4fvrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.275915 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.276068 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807fbfb1-90fe-4325-a0ac-09b309c77172-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.276425 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.277165 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.284074 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.290061 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.294471 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlhw\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-kube-api-access-rrlhw\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.321106 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-config-data" (OuterVolumeSpecName: "config-data") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.321122 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.349597 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.349640 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fvrw\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-kube-api-access-4fvrw\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.353707 4823 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807fbfb1-90fe-4325-a0ac-09b309c77172-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.353744 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.353757 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.353766 4823 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807fbfb1-90fe-4325-a0ac-09b309c77172-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.353778 4823 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.353809 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.392046 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7\") " pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.437641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-server-conf" (OuterVolumeSpecName: "server-conf") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.442177 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.456403 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.456714 4823 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807fbfb1-90fe-4325-a0ac-09b309c77172-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.473792 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "807fbfb1-90fe-4325-a0ac-09b309c77172" (UID: "807fbfb1-90fe-4325-a0ac-09b309c77172"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.558618 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807fbfb1-90fe-4325-a0ac-09b309c77172-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.625302 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.911243 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807fbfb1-90fe-4325-a0ac-09b309c77172","Type":"ContainerDied","Data":"1ac9c0622b71d63b84e250947f1414dba4794b8cd98151e9987bede9c843a77c"} Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.911922 4823 scope.go:117] "RemoveContainer" containerID="2aa6f450971c1e8a9d97d5edad42e9ba460acafa208bade36d92d0f33d45a43d" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.912185 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.965049 4823 scope.go:117] "RemoveContainer" containerID="7a5bed63e275100585b394ef20b463de889ed8df1d45c00b86053347a156377a" Dec 06 06:51:15 crc kubenswrapper[4823]: I1206 06:51:15.998684 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.055964 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.071791 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:51:16 crc kubenswrapper[4823]: E1206 06:51:16.072462 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="rabbitmq" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.072488 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="rabbitmq" Dec 06 06:51:16 crc kubenswrapper[4823]: E1206 06:51:16.072504 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="setup-container" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.072512 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="setup-container" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.072812 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" containerName="rabbitmq" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.074476 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.078058 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.079043 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.079277 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.079445 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.079547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.079624 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-svzwv" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.079653 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.096542 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.171252 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.171822 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.171871 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.171914 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172038 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gd8v\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-kube-api-access-6gd8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172078 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/98d25d92-00b6-4897-b3df-0976c9c3a8eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172124 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/98d25d92-00b6-4897-b3df-0976c9c3a8eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172157 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172194 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172214 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172237 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.172289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.274690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.274763 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.274826 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 
06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.274980 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gd8v\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-kube-api-access-6gd8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/98d25d92-00b6-4897-b3df-0976c9c3a8eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275092 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/98d25d92-00b6-4897-b3df-0976c9c3a8eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275120 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275193 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275227 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275534 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.275986 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.276172 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.277505 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.279066 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.279271 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98d25d92-00b6-4897-b3df-0976c9c3a8eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.281219 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.281314 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.282013 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/98d25d92-00b6-4897-b3df-0976c9c3a8eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.283381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/98d25d92-00b6-4897-b3df-0976c9c3a8eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.298094 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gd8v\" (UniqueName: \"kubernetes.io/projected/98d25d92-00b6-4897-b3df-0976c9c3a8eb-kube-api-access-6gd8v\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.332632 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98d25d92-00b6-4897-b3df-0976c9c3a8eb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.413098 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:51:16 crc kubenswrapper[4823]: I1206 06:51:16.946912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7","Type":"ContainerStarted","Data":"c01b83105944db8635ef19c04a2704656773b0ff4def192d5351b4166c66b010"} Dec 06 06:51:17 crc kubenswrapper[4823]: I1206 06:51:17.098443 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 06:51:17 crc kubenswrapper[4823]: I1206 06:51:17.158262 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807fbfb1-90fe-4325-a0ac-09b309c77172" path="/var/lib/kubelet/pods/807fbfb1-90fe-4325-a0ac-09b309c77172/volumes" Dec 06 06:51:17 crc kubenswrapper[4823]: I1206 06:51:17.959114 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98d25d92-00b6-4897-b3df-0976c9c3a8eb","Type":"ContainerStarted","Data":"920e0c1da585b6499b16bc4d1872e2fd3a1dad6f8062f3d6006d1e57c6039385"} Dec 06 06:51:18 crc kubenswrapper[4823]: I1206 06:51:18.973694 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98d25d92-00b6-4897-b3df-0976c9c3a8eb","Type":"ContainerStarted","Data":"4e7c6a9fa8c1ee338eecf239c1679d87da7eca8d0d63126c67401d46e854a042"} Dec 06 06:51:18 crc kubenswrapper[4823]: I1206 06:51:18.976703 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7","Type":"ContainerStarted","Data":"a71c88c99b21565443b8612e26819972c10024e6dabf73de72d19713101f570d"} Dec 06 06:51:24 crc kubenswrapper[4823]: I1206 06:51:24.991087 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f56474995-wbdpn"] Dec 06 06:51:24 crc kubenswrapper[4823]: I1206 06:51:24.994188 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:24 crc kubenswrapper[4823]: I1206 06:51:24.998468 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.018254 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f56474995-wbdpn"] Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090384 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-svc\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-nb\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-config\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090593 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-swift-storage-0\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrhv9\" (UniqueName: \"kubernetes.io/projected/311efc88-9e03-4b89-bf75-c1e64f9873f1-kube-api-access-mrhv9\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090642 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.090681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-sb\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.193033 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-swift-storage-0\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: 
\"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.193378 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrhv9\" (UniqueName: \"kubernetes.io/projected/311efc88-9e03-4b89-bf75-c1e64f9873f1-kube-api-access-mrhv9\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.193822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.193932 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-sb\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.194107 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-svc\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.194330 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-nb\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.194374 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-config\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.196833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-config\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.194105 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-swift-storage-0\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.197802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-sb\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc 
kubenswrapper[4823]: I1206 06:51:25.197827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.198022 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-nb\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.198422 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-svc\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.247183 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrhv9\" (UniqueName: \"kubernetes.io/projected/311efc88-9e03-4b89-bf75-c1e64f9873f1-kube-api-access-mrhv9\") pod \"dnsmasq-dns-f56474995-wbdpn\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") " pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.315622 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:25 crc kubenswrapper[4823]: I1206 06:51:25.916315 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f56474995-wbdpn"] Dec 06 06:51:26 crc kubenswrapper[4823]: I1206 06:51:26.068880 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f56474995-wbdpn" event={"ID":"311efc88-9e03-4b89-bf75-c1e64f9873f1","Type":"ContainerStarted","Data":"63d4665c198a93337473b50a483f562aae39daa6db46c7032b4e51acab14731b"} Dec 06 06:51:27 crc kubenswrapper[4823]: I1206 06:51:27.080589 4823 generic.go:334] "Generic (PLEG): container finished" podID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerID="5fc11ea57edb209c9459412eb4cdb4071f99063c9e79888b9ea3f358bc92217f" exitCode=0 Dec 06 06:51:27 crc kubenswrapper[4823]: I1206 06:51:27.080707 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f56474995-wbdpn" event={"ID":"311efc88-9e03-4b89-bf75-c1e64f9873f1","Type":"ContainerDied","Data":"5fc11ea57edb209c9459412eb4cdb4071f99063c9e79888b9ea3f358bc92217f"} Dec 06 06:51:28 crc kubenswrapper[4823]: I1206 06:51:28.094593 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f56474995-wbdpn" event={"ID":"311efc88-9e03-4b89-bf75-c1e64f9873f1","Type":"ContainerStarted","Data":"91e469cd349a94e6cc10021d6b5ac2cb85c4c6ad3898b9805bc2ab605dc98406"} Dec 06 06:51:28 crc kubenswrapper[4823]: I1206 06:51:28.094974 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:28 crc kubenswrapper[4823]: I1206 06:51:28.114587 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f56474995-wbdpn" podStartSLOduration=4.114555975 podStartE2EDuration="4.114555975s" podCreationTimestamp="2025-12-06 06:51:24 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:51:28.110287431 +0000 UTC m=+1589.396039401" watchObservedRunningTime="2025-12-06 06:51:28.114555975 +0000 UTC m=+1589.400307935" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.318915 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f56474995-wbdpn" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.405641 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657f5df845-fq9wm"] Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.406058 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerName="dnsmasq-dns" containerID="cri-o://80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d" gracePeriod=10 Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.568224 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-755bd4b5c7-97dgz"] Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.572793 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.584598 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-755bd4b5c7-97dgz"] Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.672687 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bpc5\" (UniqueName: \"kubernetes.io/projected/709c2986-1fcb-419b-9d05-2afed5c1542b-kube-api-access-4bpc5\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.672738 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-dns-svc\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.672763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-ovsdbserver-sb\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.672812 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-ovsdbserver-nb\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.672917 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-openstack-edpm-ipam\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 
06:51:35.673022 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-config\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.673060 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-dns-swift-storage-0\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775155 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-dns-svc\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775228 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-ovsdbserver-sb\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-ovsdbserver-nb\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775366 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-openstack-edpm-ipam\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775470 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-config\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-dns-swift-storage-0\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.775598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bpc5\" (UniqueName: \"kubernetes.io/projected/709c2986-1fcb-419b-9d05-2afed5c1542b-kube-api-access-4bpc5\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.776411 
4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-ovsdbserver-sb\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.776472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-dns-svc\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.777275 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-ovsdbserver-nb\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.777286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-dns-swift-storage-0\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.778177 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-config\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.779257 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/709c2986-1fcb-419b-9d05-2afed5c1542b-openstack-edpm-ipam\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.810059 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bpc5\" (UniqueName: \"kubernetes.io/projected/709c2986-1fcb-419b-9d05-2afed5c1542b-kube-api-access-4bpc5\") pod \"dnsmasq-dns-755bd4b5c7-97dgz\" (UID: \"709c2986-1fcb-419b-9d05-2afed5c1542b\") " pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:35 crc kubenswrapper[4823]: I1206 06:51:35.917433 4823 util.go:30] "No sandbox for pod can be found. 
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.052449 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.052532 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.052593 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.053967 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.055743 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" gracePeriod=600
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.130936 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
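The liveness failure above is mechanical: the kubelet issues an HTTP GET to http://127.0.0.1:8798/health, gets connection refused (nothing is listening), marks the probe unhealthy, and kills the container with the pod's 600 s grace period so it can be restarted. A stdlib sketch of the same check — the kubelet's real prober additionally applies the thresholds, timeouts, and headers configured in the probe spec:

    # Rough equivalent of the failing liveness probe: an HTTP GET against
    # 127.0.0.1:8798/health. "connection refused" means no process is listening.
    import urllib.request, urllib.error

    def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400  # kubelet counts 2xx/3xx as success
        except (urllib.error.URLError, OSError) as exc:
            print(f"probe failed: {exc}")
            return False

    print("healthy" if http_probe() else "unhealthy")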
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.184404 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k5mw\" (UniqueName: \"kubernetes.io/projected/d5160c40-0e83-445f-bf12-b4530306aaaf-kube-api-access-9k5mw\") pod \"d5160c40-0e83-445f-bf12-b4530306aaaf\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") "
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.184679 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-swift-storage-0\") pod \"d5160c40-0e83-445f-bf12-b4530306aaaf\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") "
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.184729 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-config\") pod \"d5160c40-0e83-445f-bf12-b4530306aaaf\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") "
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.184781 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-sb\") pod \"d5160c40-0e83-445f-bf12-b4530306aaaf\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") "
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.184851 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-svc\") pod \"d5160c40-0e83-445f-bf12-b4530306aaaf\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") "
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.184950 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-nb\") pod \"d5160c40-0e83-445f-bf12-b4530306aaaf\" (UID: \"d5160c40-0e83-445f-bf12-b4530306aaaf\") "
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.230061 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5160c40-0e83-445f-bf12-b4530306aaaf-kube-api-access-9k5mw" (OuterVolumeSpecName: "kube-api-access-9k5mw") pod "d5160c40-0e83-445f-bf12-b4530306aaaf" (UID: "d5160c40-0e83-445f-bf12-b4530306aaaf"). InnerVolumeSpecName "kube-api-access-9k5mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.236617 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerID="80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d" exitCode=0
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.237023 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f5df845-fq9wm"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.237026 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" event={"ID":"d5160c40-0e83-445f-bf12-b4530306aaaf","Type":"ContainerDied","Data":"80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d"}
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.237131 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f5df845-fq9wm" event={"ID":"d5160c40-0e83-445f-bf12-b4530306aaaf","Type":"ContainerDied","Data":"903e5a2bdc829027ba499ae5d81d33228f4edb304e51d6d075154e8aa6ba968c"}
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.237190 4823 scope.go:117] "RemoveContainer" containerID="80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d"
Dec 06 06:51:36 crc kubenswrapper[4823]: E1206 06:51:36.244640 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.277224 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-config" (OuterVolumeSpecName: "config") pod "d5160c40-0e83-445f-bf12-b4530306aaaf" (UID: "d5160c40-0e83-445f-bf12-b4530306aaaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.288731 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5160c40-0e83-445f-bf12-b4530306aaaf" (UID: "d5160c40-0e83-445f-bf12-b4530306aaaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.289230 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.289329 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.289421 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k5mw\" (UniqueName: \"kubernetes.io/projected/d5160c40-0e83-445f-bf12-b4530306aaaf-kube-api-access-9k5mw\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.291483 4823 scope.go:117] "RemoveContainer" containerID="cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.297363 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5160c40-0e83-445f-bf12-b4530306aaaf" (UID: "d5160c40-0e83-445f-bf12-b4530306aaaf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.304197 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5160c40-0e83-445f-bf12-b4530306aaaf" (UID: "d5160c40-0e83-445f-bf12-b4530306aaaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.317911 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5160c40-0e83-445f-bf12-b4530306aaaf" (UID: "d5160c40-0e83-445f-bf12-b4530306aaaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.333052 4823 scope.go:117] "RemoveContainer" containerID="80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d"
Dec 06 06:51:36 crc kubenswrapper[4823]: E1206 06:51:36.333619 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d\": container with ID starting with 80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d not found: ID does not exist" containerID="80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.333687 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d"} err="failed to get container status \"80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d\": rpc error: code = NotFound desc = could not find container \"80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d\": container with ID starting with 80872e8a676300b792c7a704c56e4f8858e7092060cae17557facf47168c280d not found: ID does not exist"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.333719 4823 scope.go:117] "RemoveContainer" containerID="cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6"
Dec 06 06:51:36 crc kubenswrapper[4823]: E1206 06:51:36.333988 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6\": container with ID starting with cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6 not found: ID does not exist" containerID="cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.334033 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6"} err="failed to get container status \"cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6\": rpc error: code = NotFound desc = could not find container \"cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6\": container with ID starting with cd5255edae4d79d8fb53f18dc01a0c219e39c35f6a408d15a5b0dadae8c5b1d6 not found: ID does not exist"
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.391873 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.391919 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.391936 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5160c40-0e83-445f-bf12-b4530306aaaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.426701 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-755bd4b5c7-97dgz"]
Dec 06 06:51:36 crc kubenswrapper[4823]: W1206 06:51:36.429343 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod709c2986_1fcb_419b_9d05_2afed5c1542b.slice/crio-64a40a91ef7ec8ecd4a3fa963b56e5df4203fc58891469cf956ba0cd7bc42dd2 WatchSource:0}: Error finding container 64a40a91ef7ec8ecd4a3fa963b56e5df4203fc58891469cf956ba0cd7bc42dd2: Status 404 returned error can't find the container with id 64a40a91ef7ec8ecd4a3fa963b56e5df4203fc58891469cf956ba0cd7bc42dd2
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.728514 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657f5df845-fq9wm"]
Dec 06 06:51:36 crc kubenswrapper[4823]: I1206 06:51:36.741795 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-657f5df845-fq9wm"]
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.153345 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" path="/var/lib/kubelet/pods/d5160c40-0e83-445f-bf12-b4530306aaaf/volumes"
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.253375 4823 generic.go:334] "Generic (PLEG): container finished" podID="709c2986-1fcb-419b-9d05-2afed5c1542b" containerID="7af2c8a604e23a65501842555e28e39d745dfafc6a2adf938fee2983d17b9d31" exitCode=0
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.253458 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" event={"ID":"709c2986-1fcb-419b-9d05-2afed5c1542b","Type":"ContainerDied","Data":"7af2c8a604e23a65501842555e28e39d745dfafc6a2adf938fee2983d17b9d31"}
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.253493 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" event={"ID":"709c2986-1fcb-419b-9d05-2afed5c1542b","Type":"ContainerStarted","Data":"64a40a91ef7ec8ecd4a3fa963b56e5df4203fc58891469cf956ba0cd7bc42dd2"}
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.276272 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" exitCode=0
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.276488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423"}
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.282708 4823 scope.go:117] "RemoveContainer" containerID="cf0da5e873b0675ce3affbf1aff07940b681c1bb20491ade8083d807561c411f"
Dec 06 06:51:37 crc kubenswrapper[4823]: I1206 06:51:37.284922 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423"
Dec 06 06:51:37 crc kubenswrapper[4823]: E1206 06:51:37.286029 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 06:51:38 crc kubenswrapper[4823]: I1206 06:51:38.291502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" event={"ID":"709c2986-1fcb-419b-9d05-2afed5c1542b","Type":"ContainerStarted","Data":"7261c9f435345bb2c36d76160f1f5f58b73f98fc24a5488c22f95a6f17337a89"}
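The repeated "CrashLoopBackOff: back-off 5m0s" errors for machine-config-daemon (06:51:36, 06:51:37, and again below at 06:51:48, 06:52:00, 06:52:13, 06:52:28) are the kubelet declining to restart the container while its back-off window is open; each sync attempt just re-logs the error. Under the kubelet's documented defaults the restart delay starts at 10 s and doubles per crash until capped at 5 minutes, which is where the "5m0s" in the message comes from. A sketch of that schedule:

    # CrashLoopBackOff delay schedule under the kubelet's documented defaults:
    # 10s initial delay, doubling after each crash, capped at 5 minutes.
    def backoff_delays(initial=10, factor=2, cap=300, restarts=8):
        delay, delays = initial, []
        for _ in range(restarts):
            delays.append(min(delay, cap))
            delay *= factor
        return delays

    print(backoff_delays())  # [10, 20, 40, 80, 160, 300, 300, 300]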
pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" event={"ID":"709c2986-1fcb-419b-9d05-2afed5c1542b","Type":"ContainerStarted","Data":"7261c9f435345bb2c36d76160f1f5f58b73f98fc24a5488c22f95a6f17337a89"} Dec 06 06:51:38 crc kubenswrapper[4823]: I1206 06:51:38.292008 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:38 crc kubenswrapper[4823]: I1206 06:51:38.317932 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" podStartSLOduration=3.31791038 podStartE2EDuration="3.31791038s" podCreationTimestamp="2025-12-06 06:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:51:38.315004976 +0000 UTC m=+1599.600756936" watchObservedRunningTime="2025-12-06 06:51:38.31791038 +0000 UTC m=+1599.603662340" Dec 06 06:51:40 crc kubenswrapper[4823]: I1206 06:51:40.693343 4823 scope.go:117] "RemoveContainer" containerID="438ae53c9cc8a28fdb62cfcc2dc2f5b24d39e6e6cb1b54d73d8c68fd8eb67b06" Dec 06 06:51:45 crc kubenswrapper[4823]: I1206 06:51:45.919337 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-755bd4b5c7-97dgz" Dec 06 06:51:45 crc kubenswrapper[4823]: I1206 06:51:45.988324 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f56474995-wbdpn"] Dec 06 06:51:45 crc kubenswrapper[4823]: I1206 06:51:45.988754 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f56474995-wbdpn" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerName="dnsmasq-dns" containerID="cri-o://91e469cd349a94e6cc10021d6b5ac2cb85c4c6ad3898b9805bc2ab605dc98406" gracePeriod=10 Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.376331 4823 generic.go:334] "Generic (PLEG): container finished" podID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerID="91e469cd349a94e6cc10021d6b5ac2cb85c4c6ad3898b9805bc2ab605dc98406" exitCode=0 Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.376613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f56474995-wbdpn" event={"ID":"311efc88-9e03-4b89-bf75-c1e64f9873f1","Type":"ContainerDied","Data":"91e469cd349a94e6cc10021d6b5ac2cb85c4c6ad3898b9805bc2ab605dc98406"} Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.377720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f56474995-wbdpn" event={"ID":"311efc88-9e03-4b89-bf75-c1e64f9873f1","Type":"ContainerDied","Data":"63d4665c198a93337473b50a483f562aae39daa6db46c7032b4e51acab14731b"} Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.377862 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d4665c198a93337473b50a483f562aae39daa6db46c7032b4e51acab14731b" Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.488370 4823 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.632900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-sb\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.633319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-config\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.633387 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-svc\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.633451 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-nb\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.633552 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-openstack-edpm-ipam\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.633582 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrhv9\" (UniqueName: \"kubernetes.io/projected/311efc88-9e03-4b89-bf75-c1e64f9873f1-kube-api-access-mrhv9\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.633680 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-swift-storage-0\") pod \"311efc88-9e03-4b89-bf75-c1e64f9873f1\" (UID: \"311efc88-9e03-4b89-bf75-c1e64f9873f1\") "
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.650973 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311efc88-9e03-4b89-bf75-c1e64f9873f1-kube-api-access-mrhv9" (OuterVolumeSpecName: "kube-api-access-mrhv9") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "kube-api-access-mrhv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.695256 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.702384 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-config" (OuterVolumeSpecName: "config") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.707557 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.709435 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.713042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.716338 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "311efc88-9e03-4b89-bf75-c1e64f9873f1" (UID: "311efc88-9e03-4b89-bf75-c1e64f9873f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735871 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735916 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735930 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735946 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrhv9\" (UniqueName: \"kubernetes.io/projected/311efc88-9e03-4b89-bf75-c1e64f9873f1-kube-api-access-mrhv9\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735956 4823 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735964 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:46 crc kubenswrapper[4823]: I1206 06:51:46.735972 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311efc88-9e03-4b89-bf75-c1e64f9873f1-config\") on node \"crc\" DevicePath \"\""
Dec 06 06:51:47 crc kubenswrapper[4823]: I1206 06:51:47.387385 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f56474995-wbdpn"
Dec 06 06:51:47 crc kubenswrapper[4823]: I1206 06:51:47.412975 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f56474995-wbdpn"]
Dec 06 06:51:47 crc kubenswrapper[4823]: I1206 06:51:47.423481 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f56474995-wbdpn"]
Dec 06 06:51:48 crc kubenswrapper[4823]: I1206 06:51:48.141455 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423"
Dec 06 06:51:48 crc kubenswrapper[4823]: E1206 06:51:48.142022 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 06:51:49 crc kubenswrapper[4823]: I1206 06:51:49.155460 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" path="/var/lib/kubelet/pods/311efc88-9e03-4b89-bf75-c1e64f9873f1/volumes"
Dec 06 06:51:51 crc kubenswrapper[4823]: I1206 06:51:51.601475 4823 generic.go:334] "Generic (PLEG): container finished" podID="98d25d92-00b6-4897-b3df-0976c9c3a8eb" containerID="4e7c6a9fa8c1ee338eecf239c1679d87da7eca8d0d63126c67401d46e854a042" exitCode=0
Dec 06 06:51:51 crc kubenswrapper[4823]: I1206 06:51:51.601503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98d25d92-00b6-4897-b3df-0976c9c3a8eb","Type":"ContainerDied","Data":"4e7c6a9fa8c1ee338eecf239c1679d87da7eca8d0d63126c67401d46e854a042"}
Dec 06 06:51:51 crc kubenswrapper[4823]: I1206 06:51:51.604868 4823 generic.go:334] "Generic (PLEG): container finished" podID="1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7" containerID="a71c88c99b21565443b8612e26819972c10024e6dabf73de72d19713101f570d" exitCode=0
Dec 06 06:51:51 crc kubenswrapper[4823]: I1206 06:51:51.604900 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7","Type":"ContainerDied","Data":"a71c88c99b21565443b8612e26819972c10024e6dabf73de72d19713101f570d"}
Dec 06 06:51:52 crc kubenswrapper[4823]: I1206 06:51:52.618957 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7","Type":"ContainerStarted","Data":"e28f1931c13eae865f1325b68b365400fe6a5ac6b50d320d56728ad1829bd6e2"}
Dec 06 06:51:52 crc kubenswrapper[4823]: I1206 06:51:52.621382 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 06 06:51:52 crc kubenswrapper[4823]: I1206 06:51:52.630588 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98d25d92-00b6-4897-b3df-0976c9c3a8eb","Type":"ContainerStarted","Data":"a9a3e448fc7d2301e7ba005e41d5db58ccd6f52186bca60a1a02bcd1243ba824"}
Dec 06 06:51:52 crc kubenswrapper[4823]: I1206 06:51:52.632133 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 06:51:52 crc kubenswrapper[4823]: I1206 06:51:52.667023 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.666993315 podStartE2EDuration="38.666993315s" podCreationTimestamp="2025-12-06 06:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:51:52.661441914 +0000 UTC m=+1613.947193874" watchObservedRunningTime="2025-12-06 06:51:52.666993315 +0000 UTC m=+1613.952745285"
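The dnsmasq-dns-f56474995-wbdpn teardown above follows the standard graceful-deletion path: an API DELETE arrives (SyncLoop DELETE), the kubelet kills the container with the requested 10 s grace period, the PLEG reports ContainerDied, volumes are unmounted and marked detached, the API object is finalized (SyncLoop REMOVE), and the orphaned volume directory is cleaned up. A minimal sketch of the API call that starts this flow, assuming the kubernetes Python client and a reachable cluster:

    # Sketch: the DELETE that kicks off the teardown sequence seen above.
    # The kubelet performs the kill -> unmount -> cleanup steps on its own.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    v1.delete_namespaced_pod(
        name="dnsmasq-dns-f56474995-wbdpn",
        namespace="openstack",
        grace_period_seconds=10,  # matches gracePeriod=10 in the log
    )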
pod="openstack/rabbitmq-server-0" podStartSLOduration=38.666993315 podStartE2EDuration="38.666993315s" podCreationTimestamp="2025-12-06 06:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:51:52.661441914 +0000 UTC m=+1613.947193874" watchObservedRunningTime="2025-12-06 06:51:52.666993315 +0000 UTC m=+1613.952745285" Dec 06 06:51:52 crc kubenswrapper[4823]: I1206 06:51:52.697176 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.697154079 podStartE2EDuration="37.697154079s" podCreationTimestamp="2025-12-06 06:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:51:52.690526257 +0000 UTC m=+1613.976278227" watchObservedRunningTime="2025-12-06 06:51:52.697154079 +0000 UTC m=+1613.982906039" Dec 06 06:52:00 crc kubenswrapper[4823]: I1206 06:52:00.141447 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:52:00 crc kubenswrapper[4823]: E1206 06:52:00.142266 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.358040 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg"] Dec 06 06:52:04 crc kubenswrapper[4823]: E1206 06:52:04.358946 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerName="init" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.358966 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerName="init" Dec 06 06:52:04 crc kubenswrapper[4823]: E1206 06:52:04.358980 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerName="dnsmasq-dns" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.358987 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerName="dnsmasq-dns" Dec 06 06:52:04 crc kubenswrapper[4823]: E1206 06:52:04.359017 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerName="init" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.359026 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerName="init" Dec 06 06:52:04 crc kubenswrapper[4823]: E1206 06:52:04.359039 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerName="dnsmasq-dns" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.359049 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" containerName="dnsmasq-dns" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.359870 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="311efc88-9e03-4b89-bf75-c1e64f9873f1" 
containerName="dnsmasq-dns" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.359895 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5160c40-0e83-445f-bf12-b4530306aaaf" containerName="dnsmasq-dns" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.360724 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.369025 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.370030 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.370178 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.370279 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.404712 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg"] Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.476243 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.476374 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.476410 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstcf\" (UniqueName: \"kubernetes.io/projected/8a037ce0-c728-4523-b34b-9add69b94c18-kube-api-access-kstcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.476587 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.579911 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.580038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.580073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstcf\" (UniqueName: \"kubernetes.io/projected/8a037ce0-c728-4523-b34b-9add69b94c18-kube-api-access-kstcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.580112 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.588705 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.589115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.589299 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.602412 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstcf\" (UniqueName: \"kubernetes.io/projected/8a037ce0-c728-4523-b34b-9add69b94c18-kube-api-access-kstcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-575pg\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:04 crc kubenswrapper[4823]: I1206 06:52:04.683736 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:05 crc kubenswrapper[4823]: I1206 06:52:05.380459 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg"] Dec 06 06:52:05 crc kubenswrapper[4823]: I1206 06:52:05.405099 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:52:05 crc kubenswrapper[4823]: I1206 06:52:05.627798 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.224:5671: connect: connection refused" Dec 06 06:52:05 crc kubenswrapper[4823]: I1206 06:52:05.769507 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" event={"ID":"8a037ce0-c728-4523-b34b-9add69b94c18","Type":"ContainerStarted","Data":"ddd4f906b15c82d838e0ee704f7337710c97aee5a7663e6ed2d9497c3513a48b"} Dec 06 06:52:06 crc kubenswrapper[4823]: I1206 06:52:06.416447 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="98d25d92-00b6-4897-b3df-0976c9c3a8eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.225:5671: connect: connection refused" Dec 06 06:52:13 crc kubenswrapper[4823]: I1206 06:52:13.141795 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:52:13 crc kubenswrapper[4823]: E1206 06:52:13.142816 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:52:15 crc kubenswrapper[4823]: I1206 06:52:15.651997 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 06:52:16 crc kubenswrapper[4823]: I1206 06:52:16.415951 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 06:52:17 crc kubenswrapper[4823]: I1206 06:52:17.919213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" event={"ID":"8a037ce0-c728-4523-b34b-9add69b94c18","Type":"ContainerStarted","Data":"2f75f1bc670a45c19e6f96d6a5006e6f5a8793fc40611fb52b20291167a7b338"} Dec 06 06:52:17 crc kubenswrapper[4823]: I1206 06:52:17.943155 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" podStartSLOduration=2.566437573 podStartE2EDuration="13.943128503s" podCreationTimestamp="2025-12-06 06:52:04 +0000 UTC" firstStartedPulling="2025-12-06 06:52:05.404776324 +0000 UTC m=+1626.690528284" lastFinishedPulling="2025-12-06 06:52:16.781467254 +0000 UTC m=+1638.067219214" observedRunningTime="2025-12-06 06:52:17.939320333 +0000 UTC m=+1639.225072293" watchObservedRunningTime="2025-12-06 06:52:17.943128503 +0000 UTC m=+1639.228880463" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.018861 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
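The rabbitmq readiness failures above are TCP dials to the pods' AMQPS listeners (port 5671) that get connection refused while the freshly restarted servers are still booting; both pods flip to ready about ten seconds later (06:52:15/06:52:16). A stdlib sketch of the same kind of check, matching the "dial tcp ... connection refused" output in the log:

    # Rough equivalent of the readiness check whose output appears in the log:
    # a plain TCP connect to the pod IP on port 5671 (TLS AMQP listener).
    import socket

    def tcp_ready(host, port=5671, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print(f"dial tcp {host}:{port}: {exc}")
            return False

    print(tcp_ready("10.217.0.224"))  # pod IP taken from the log entries above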
pods=["openshift-marketplace/redhat-marketplace-2z2qf"] Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.022209 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.036685 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2qf"] Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.140271 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-utilities\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.140544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9pm\" (UniqueName: \"kubernetes.io/projected/c10ac7a3-e06a-4302-956d-719205bd5dc3-kube-api-access-mq9pm\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.141355 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-catalog-content\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.243639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq9pm\" (UniqueName: \"kubernetes.io/projected/c10ac7a3-e06a-4302-956d-719205bd5dc3-kube-api-access-mq9pm\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.243808 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-catalog-content\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.243938 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-utilities\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.244536 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-catalog-content\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.246942 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-utilities\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") 
" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.286114 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq9pm\" (UniqueName: \"kubernetes.io/projected/c10ac7a3-e06a-4302-956d-719205bd5dc3-kube-api-access-mq9pm\") pod \"redhat-marketplace-2z2qf\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.341561 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:18 crc kubenswrapper[4823]: W1206 06:52:18.845062 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc10ac7a3_e06a_4302_956d_719205bd5dc3.slice/crio-571d32f64cff235fbcd7b6cf672a4daa93965d2ac7ec26326b17b03f81e04d79 WatchSource:0}: Error finding container 571d32f64cff235fbcd7b6cf672a4daa93965d2ac7ec26326b17b03f81e04d79: Status 404 returned error can't find the container with id 571d32f64cff235fbcd7b6cf672a4daa93965d2ac7ec26326b17b03f81e04d79 Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.845976 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2qf"] Dec 06 06:52:18 crc kubenswrapper[4823]: I1206 06:52:18.935027 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2qf" event={"ID":"c10ac7a3-e06a-4302-956d-719205bd5dc3","Type":"ContainerStarted","Data":"571d32f64cff235fbcd7b6cf672a4daa93965d2ac7ec26326b17b03f81e04d79"} Dec 06 06:52:19 crc kubenswrapper[4823]: I1206 06:52:19.948881 4823 generic.go:334] "Generic (PLEG): container finished" podID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerID="8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e" exitCode=0 Dec 06 06:52:19 crc kubenswrapper[4823]: I1206 06:52:19.949025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2qf" event={"ID":"c10ac7a3-e06a-4302-956d-719205bd5dc3","Type":"ContainerDied","Data":"8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e"} Dec 06 06:52:20 crc kubenswrapper[4823]: I1206 06:52:20.964808 4823 generic.go:334] "Generic (PLEG): container finished" podID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerID="217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c" exitCode=0 Dec 06 06:52:20 crc kubenswrapper[4823]: I1206 06:52:20.964890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2qf" event={"ID":"c10ac7a3-e06a-4302-956d-719205bd5dc3","Type":"ContainerDied","Data":"217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c"} Dec 06 06:52:21 crc kubenswrapper[4823]: I1206 06:52:21.980481 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2qf" event={"ID":"c10ac7a3-e06a-4302-956d-719205bd5dc3","Type":"ContainerStarted","Data":"3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9"} Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.003629 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2z2qf" podStartSLOduration=3.604578377 podStartE2EDuration="5.003601683s" podCreationTimestamp="2025-12-06 06:52:17 +0000 UTC" firstStartedPulling="2025-12-06 06:52:19.951197615 +0000 UTC m=+1641.236949575" 
lastFinishedPulling="2025-12-06 06:52:21.350220931 +0000 UTC m=+1642.635972881" observedRunningTime="2025-12-06 06:52:22.000203614 +0000 UTC m=+1643.285955584" watchObservedRunningTime="2025-12-06 06:52:22.003601683 +0000 UTC m=+1643.289353633" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.665394 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdqmb"] Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.668641 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.682618 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdqmb"] Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.857625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-utilities\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.858079 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlkt\" (UniqueName: \"kubernetes.io/projected/3d065c76-d379-4549-a3ad-a624878305c6-kube-api-access-gdlkt\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.866739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-catalog-content\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.968363 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-utilities\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.968517 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdlkt\" (UniqueName: \"kubernetes.io/projected/3d065c76-d379-4549-a3ad-a624878305c6-kube-api-access-gdlkt\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.968599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-catalog-content\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.969344 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-catalog-content\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " 
pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:22 crc kubenswrapper[4823]: I1206 06:52:22.969680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-utilities\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:23 crc kubenswrapper[4823]: I1206 06:52:23.015189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdlkt\" (UniqueName: \"kubernetes.io/projected/3d065c76-d379-4549-a3ad-a624878305c6-kube-api-access-gdlkt\") pod \"community-operators-sdqmb\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:23 crc kubenswrapper[4823]: I1206 06:52:23.175451 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:23 crc kubenswrapper[4823]: I1206 06:52:23.819053 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdqmb"] Dec 06 06:52:24 crc kubenswrapper[4823]: I1206 06:52:24.011276 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerStarted","Data":"39bb3d1e36c708ab27f145cadfcca5a2ea486b9f6838b962f5ed81760c8bf5a2"} Dec 06 06:52:25 crc kubenswrapper[4823]: I1206 06:52:25.023380 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d065c76-d379-4549-a3ad-a624878305c6" containerID="82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980" exitCode=0 Dec 06 06:52:25 crc kubenswrapper[4823]: I1206 06:52:25.023458 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerDied","Data":"82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980"} Dec 06 06:52:27 crc kubenswrapper[4823]: I1206 06:52:27.046651 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerStarted","Data":"ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f"} Dec 06 06:52:28 crc kubenswrapper[4823]: I1206 06:52:28.071777 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d065c76-d379-4549-a3ad-a624878305c6" containerID="ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f" exitCode=0 Dec 06 06:52:28 crc kubenswrapper[4823]: I1206 06:52:28.072102 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerDied","Data":"ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f"} Dec 06 06:52:28 crc kubenswrapper[4823]: I1206 06:52:28.141049 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:52:28 crc kubenswrapper[4823]: E1206 06:52:28.141340 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:52:28 crc kubenswrapper[4823]: I1206 06:52:28.342195 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:28 crc kubenswrapper[4823]: I1206 06:52:28.342285 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:28 crc kubenswrapper[4823]: I1206 06:52:28.401187 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:29 crc kubenswrapper[4823]: I1206 06:52:29.089796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerStarted","Data":"df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780"} Dec 06 06:52:29 crc kubenswrapper[4823]: I1206 06:52:29.111113 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sdqmb" podStartSLOduration=3.4716883579999998 podStartE2EDuration="7.111083677s" podCreationTimestamp="2025-12-06 06:52:22 +0000 UTC" firstStartedPulling="2025-12-06 06:52:25.026981213 +0000 UTC m=+1646.312733173" lastFinishedPulling="2025-12-06 06:52:28.666376532 +0000 UTC m=+1649.952128492" observedRunningTime="2025-12-06 06:52:29.109400128 +0000 UTC m=+1650.395152088" watchObservedRunningTime="2025-12-06 06:52:29.111083677 +0000 UTC m=+1650.396835637" Dec 06 06:52:29 crc kubenswrapper[4823]: I1206 06:52:29.160001 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:30 crc kubenswrapper[4823]: I1206 06:52:30.127737 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a037ce0-c728-4523-b34b-9add69b94c18" containerID="2f75f1bc670a45c19e6f96d6a5006e6f5a8793fc40611fb52b20291167a7b338" exitCode=0 Dec 06 06:52:30 crc kubenswrapper[4823]: I1206 06:52:30.127843 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" event={"ID":"8a037ce0-c728-4523-b34b-9add69b94c18","Type":"ContainerDied","Data":"2f75f1bc670a45c19e6f96d6a5006e6f5a8793fc40611fb52b20291167a7b338"} Dec 06 06:52:30 crc kubenswrapper[4823]: I1206 06:52:30.747500 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2qf"] Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.138614 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2z2qf" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="registry-server" containerID="cri-o://3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9" gracePeriod=2 Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.730208 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.742609 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.794332 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-inventory\") pod \"8a037ce0-c728-4523-b34b-9add69b94c18\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.794899 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq9pm\" (UniqueName: \"kubernetes.io/projected/c10ac7a3-e06a-4302-956d-719205bd5dc3-kube-api-access-mq9pm\") pod \"c10ac7a3-e06a-4302-956d-719205bd5dc3\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.795021 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-catalog-content\") pod \"c10ac7a3-e06a-4302-956d-719205bd5dc3\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.795165 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-repo-setup-combined-ca-bundle\") pod \"8a037ce0-c728-4523-b34b-9add69b94c18\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.795200 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-ssh-key\") pod \"8a037ce0-c728-4523-b34b-9add69b94c18\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.795225 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-utilities\") pod \"c10ac7a3-e06a-4302-956d-719205bd5dc3\" (UID: \"c10ac7a3-e06a-4302-956d-719205bd5dc3\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.795259 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kstcf\" (UniqueName: \"kubernetes.io/projected/8a037ce0-c728-4523-b34b-9add69b94c18-kube-api-access-kstcf\") pod \"8a037ce0-c728-4523-b34b-9add69b94c18\" (UID: \"8a037ce0-c728-4523-b34b-9add69b94c18\") " Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.797864 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-utilities" (OuterVolumeSpecName: "utilities") pod "c10ac7a3-e06a-4302-956d-719205bd5dc3" (UID: "c10ac7a3-e06a-4302-956d-719205bd5dc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.805735 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8a037ce0-c728-4523-b34b-9add69b94c18" (UID: "8a037ce0-c728-4523-b34b-9add69b94c18"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
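
Every entry in this stream carries the standard klog header inside a journald prefix: severity plus MMDD ("I1206" / "E1206"), wall-clock time, PID, then "source_file.go:line]" and the structured message. A minimal parsing sketch (the field names are mine, not kubelet's):

    import re

    # journald prefix + klog header, e.g.:
    #   Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.794332 4823 reconciler_common.go:159] "..."
    KLOG = re.compile(
        r'^(?P<wall>\w{3} \d{2} \d{2}:\d{2}:\d{2}) (?P<host>\S+) (?P<unit>[\w-]+)\[\d+\]: '
        r'(?P<sev>[IWEF])\d{4} (?P<time>\d{2}:\d{2}:\d{2}\.\d+) +\d+ '
        r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
    )

    line = ('Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.794332 4823 '
            'reconciler_common.go:159] "operationExecutor.UnmountVolume started"')
    m = KLOG.match(line)
    print(m.group('sev'), m.group('src'))   # -> I reconciler_common.go:159
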
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.806888 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a037ce0-c728-4523-b34b-9add69b94c18-kube-api-access-kstcf" (OuterVolumeSpecName: "kube-api-access-kstcf") pod "8a037ce0-c728-4523-b34b-9add69b94c18" (UID: "8a037ce0-c728-4523-b34b-9add69b94c18"). InnerVolumeSpecName "kube-api-access-kstcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.808219 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10ac7a3-e06a-4302-956d-719205bd5dc3-kube-api-access-mq9pm" (OuterVolumeSpecName: "kube-api-access-mq9pm") pod "c10ac7a3-e06a-4302-956d-719205bd5dc3" (UID: "c10ac7a3-e06a-4302-956d-719205bd5dc3"). InnerVolumeSpecName "kube-api-access-mq9pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.829537 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c10ac7a3-e06a-4302-956d-719205bd5dc3" (UID: "c10ac7a3-e06a-4302-956d-719205bd5dc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.835971 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-inventory" (OuterVolumeSpecName: "inventory") pod "8a037ce0-c728-4523-b34b-9add69b94c18" (UID: "8a037ce0-c728-4523-b34b-9add69b94c18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.853090 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8a037ce0-c728-4523-b34b-9add69b94c18" (UID: "8a037ce0-c728-4523-b34b-9add69b94c18"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.898918 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.898960 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.898974 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.898990 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kstcf\" (UniqueName: \"kubernetes.io/projected/8a037ce0-c728-4523-b34b-9add69b94c18-kube-api-access-kstcf\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.899006 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a037ce0-c728-4523-b34b-9add69b94c18-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.899019 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq9pm\" (UniqueName: \"kubernetes.io/projected/c10ac7a3-e06a-4302-956d-719205bd5dc3-kube-api-access-mq9pm\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:31 crc kubenswrapper[4823]: I1206 06:52:31.899030 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10ac7a3-e06a-4302-956d-719205bd5dc3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.154523 4823 generic.go:334] "Generic (PLEG): container finished" podID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerID="3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9" exitCode=0 Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.154611 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2qf" event={"ID":"c10ac7a3-e06a-4302-956d-719205bd5dc3","Type":"ContainerDied","Data":"3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9"} Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.154744 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2qf" event={"ID":"c10ac7a3-e06a-4302-956d-719205bd5dc3","Type":"ContainerDied","Data":"571d32f64cff235fbcd7b6cf672a4daa93965d2ac7ec26326b17b03f81e04d79"} Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.154774 4823 scope.go:117] "RemoveContainer" containerID="3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.156208 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2qf" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.157104 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" event={"ID":"8a037ce0-c728-4523-b34b-9add69b94c18","Type":"ContainerDied","Data":"ddd4f906b15c82d838e0ee704f7337710c97aee5a7663e6ed2d9497c3513a48b"} Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.157148 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd4f906b15c82d838e0ee704f7337710c97aee5a7663e6ed2d9497c3513a48b" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.157231 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-575pg" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.195363 4823 scope.go:117] "RemoveContainer" containerID="217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.216882 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2qf"] Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.222301 4823 scope.go:117] "RemoveContainer" containerID="8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.229081 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2qf"] Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.242759 4823 scope.go:117] "RemoveContainer" containerID="3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9" Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.243332 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9\": container with ID starting with 3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9 not found: ID does not exist" containerID="3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.243365 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9"} err="failed to get container status \"3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9\": rpc error: code = NotFound desc = could not find container \"3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9\": container with ID starting with 3aaab552449fa7d31e491fa44e65783f891b075841218db56859e4662ac9fbc9 not found: ID does not exist" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.243389 4823 scope.go:117] "RemoveContainer" containerID="217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c" Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.243680 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c\": container with ID starting with 217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c not found: ID does not exist" containerID="217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.243716 4823 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c"} err="failed to get container status \"217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c\": rpc error: code = NotFound desc = could not find container \"217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c\": container with ID starting with 217ada5516eb276d06a9bcb6d13b7d1936e3376f534a7203ab4a3a071bcb9d8c not found: ID does not exist" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.243739 4823 scope.go:117] "RemoveContainer" containerID="8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e" Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.243956 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e\": container with ID starting with 8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e not found: ID does not exist" containerID="8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.243977 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e"} err="failed to get container status \"8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e\": rpc error: code = NotFound desc = could not find container \"8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e\": container with ID starting with 8f53240049eaaabebc8226c79bf3d4c50a40e4271ff98a9ce70f6e1d0dd5303e not found: ID does not exist" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.285778 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q"] Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.286498 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="registry-server" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.286601 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="registry-server" Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.286731 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="extract-content" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.286826 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="extract-content" Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.286921 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="extract-utilities" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.286990 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="extract-utilities" Dec 06 06:52:32 crc kubenswrapper[4823]: E1206 06:52:32.287071 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a037ce0-c728-4523-b34b-9add69b94c18" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.287142 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a037ce0-c728-4523-b34b-9add69b94c18" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 
06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.287495 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" containerName="registry-server" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.287599 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a037ce0-c728-4523-b34b-9add69b94c18" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.288685 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.295211 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.295541 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.295802 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.295849 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.317169 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q"] Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.327102 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.327346 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.327535 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n946x\" (UniqueName: \"kubernetes.io/projected/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-kube-api-access-n946x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.430247 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.430334 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n946x\" (UniqueName: \"kubernetes.io/projected/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-kube-api-access-n946x\") 
pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.430421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.436213 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.443527 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.448555 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n946x\" (UniqueName: \"kubernetes.io/projected/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-kube-api-access-n946x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r428q\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:32 crc kubenswrapper[4823]: I1206 06:52:32.636159 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:33 crc kubenswrapper[4823]: I1206 06:52:33.154851 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10ac7a3-e06a-4302-956d-719205bd5dc3" path="/var/lib/kubelet/pods/c10ac7a3-e06a-4302-956d-719205bd5dc3/volumes" Dec 06 06:52:33 crc kubenswrapper[4823]: I1206 06:52:33.178842 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:33 crc kubenswrapper[4823]: I1206 06:52:33.178905 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:33 crc kubenswrapper[4823]: I1206 06:52:33.239440 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:33 crc kubenswrapper[4823]: I1206 06:52:33.269236 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q"] Dec 06 06:52:34 crc kubenswrapper[4823]: I1206 06:52:34.184817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" event={"ID":"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488","Type":"ContainerStarted","Data":"b4706864bbbe6d8c178e012caf585a6d646511289a5db675eb7df390b9d23963"} Dec 06 06:52:34 crc kubenswrapper[4823]: I1206 06:52:34.187388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" event={"ID":"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488","Type":"ContainerStarted","Data":"a00fd7c6b3e8d63bb9ef54fc70a362575a307c539eeeb5eb7ebd4222b7c84563"} Dec 06 06:52:34 crc kubenswrapper[4823]: I1206 06:52:34.201281 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" podStartSLOduration=1.697470374 podStartE2EDuration="2.201248791s" podCreationTimestamp="2025-12-06 06:52:32 +0000 UTC" firstStartedPulling="2025-12-06 06:52:33.28144128 +0000 UTC m=+1654.567193240" lastFinishedPulling="2025-12-06 06:52:33.785219697 +0000 UTC m=+1655.070971657" observedRunningTime="2025-12-06 06:52:34.199361117 +0000 UTC m=+1655.485113077" watchObservedRunningTime="2025-12-06 06:52:34.201248791 +0000 UTC m=+1655.487000751" Dec 06 06:52:34 crc kubenswrapper[4823]: I1206 06:52:34.240038 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:34 crc kubenswrapper[4823]: I1206 06:52:34.748334 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdqmb"] Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.206689 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdqmb" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="registry-server" containerID="cri-o://df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780" gracePeriod=2 Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.707847 4823 util.go:48] "No ready sandbox for pod can be found. 
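
The "Observed pod startup duration" entries are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A worked check against the redhat-edpm-deployment-openstack-edpm-ipam-r428q figures above (timestamps truncated to microseconds for Python's %f):

    from datetime import datetime

    fmt = '%Y-%m-%d %H:%M:%S.%f %z'
    created   = datetime.strptime('2025-12-06 06:52:32.000000 +0000', fmt)
    watched   = datetime.strptime('2025-12-06 06:52:34.201248 +0000', fmt)
    pull_from = datetime.strptime('2025-12-06 06:52:33.281441 +0000', fmt)
    pull_to   = datetime.strptime('2025-12-06 06:52:33.785219 +0000', fmt)

    e2e = (watched - created).total_seconds()          # ~2.201248, cf. podStartE2EDuration=2.201248791s
    slo = e2e - (pull_to - pull_from).total_seconds()  # ~1.697470, cf. podStartSLOduration=1.697470374
    print(round(e2e, 6), round(slo, 6))

The same identity holds for community-operators-sdqmb earlier in the stream: 7.111083677 - 3.639395319 = 3.471688358.
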
Need to start a new one" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.728155 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-catalog-content\") pod \"3d065c76-d379-4549-a3ad-a624878305c6\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.728589 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdlkt\" (UniqueName: \"kubernetes.io/projected/3d065c76-d379-4549-a3ad-a624878305c6-kube-api-access-gdlkt\") pod \"3d065c76-d379-4549-a3ad-a624878305c6\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.729004 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-utilities\") pod \"3d065c76-d379-4549-a3ad-a624878305c6\" (UID: \"3d065c76-d379-4549-a3ad-a624878305c6\") " Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.729880 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-utilities" (OuterVolumeSpecName: "utilities") pod "3d065c76-d379-4549-a3ad-a624878305c6" (UID: "3d065c76-d379-4549-a3ad-a624878305c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.730249 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.734936 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d065c76-d379-4549-a3ad-a624878305c6-kube-api-access-gdlkt" (OuterVolumeSpecName: "kube-api-access-gdlkt") pod "3d065c76-d379-4549-a3ad-a624878305c6" (UID: "3d065c76-d379-4549-a3ad-a624878305c6"). InnerVolumeSpecName "kube-api-access-gdlkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.791272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d065c76-d379-4549-a3ad-a624878305c6" (UID: "3d065c76-d379-4549-a3ad-a624878305c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.832389 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d065c76-d379-4549-a3ad-a624878305c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:36 crc kubenswrapper[4823]: I1206 06:52:36.832441 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdlkt\" (UniqueName: \"kubernetes.io/projected/3d065c76-d379-4549-a3ad-a624878305c6-kube-api-access-gdlkt\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.220152 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d065c76-d379-4549-a3ad-a624878305c6" containerID="df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780" exitCode=0 Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.220244 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdqmb" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.220268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerDied","Data":"df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780"} Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.221033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdqmb" event={"ID":"3d065c76-d379-4549-a3ad-a624878305c6","Type":"ContainerDied","Data":"39bb3d1e36c708ab27f145cadfcca5a2ea486b9f6838b962f5ed81760c8bf5a2"} Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.221067 4823 scope.go:117] "RemoveContainer" containerID="df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.222768 4823 generic.go:334] "Generic (PLEG): container finished" podID="cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" containerID="b4706864bbbe6d8c178e012caf585a6d646511289a5db675eb7df390b9d23963" exitCode=0 Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.222847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" event={"ID":"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488","Type":"ContainerDied","Data":"b4706864bbbe6d8c178e012caf585a6d646511289a5db675eb7df390b9d23963"} Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.264975 4823 scope.go:117] "RemoveContainer" containerID="ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.282401 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdqmb"] Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.290115 4823 scope.go:117] "RemoveContainer" containerID="82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.292242 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdqmb"] Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.344537 4823 scope.go:117] "RemoveContainer" containerID="df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780" Dec 06 06:52:37 crc kubenswrapper[4823]: E1206 06:52:37.345290 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780\": container with ID starting with df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780 not found: ID does not exist" containerID="df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.345352 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780"} err="failed to get container status \"df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780\": rpc error: code = NotFound desc = could not find container \"df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780\": container with ID starting with df6c75063c832ab8e09c822989d7a7816081f5c99cebfa4641f1bbfeb9a87780 not found: ID does not exist" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.345393 4823 scope.go:117] "RemoveContainer" containerID="ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f" Dec 06 06:52:37 crc kubenswrapper[4823]: E1206 06:52:37.345742 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f\": container with ID starting with ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f not found: ID does not exist" containerID="ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.345782 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f"} err="failed to get container status \"ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f\": rpc error: code = NotFound desc = could not find container \"ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f\": container with ID starting with ba3825e0180e96544d96d5429c2267d39ade11d956946fa4bdd168cec2ad844f not found: ID does not exist" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.345811 4823 scope.go:117] "RemoveContainer" containerID="82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980" Dec 06 06:52:37 crc kubenswrapper[4823]: E1206 06:52:37.346017 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980\": container with ID starting with 82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980 not found: ID does not exist" containerID="82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980" Dec 06 06:52:37 crc kubenswrapper[4823]: I1206 06:52:37.346050 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980"} err="failed to get container status \"82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980\": rpc error: code = NotFound desc = could not find container \"82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980\": container with ID starting with 82087bb584bdf73fe28db2c3fb4886381f4b125de0fcc158a2a08a43f109b980 not found: ID does not exist" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.668253 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.776758 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n946x\" (UniqueName: \"kubernetes.io/projected/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-kube-api-access-n946x\") pod \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.776996 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-ssh-key\") pod \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.777018 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-inventory\") pod \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\" (UID: \"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488\") " Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.782657 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-kube-api-access-n946x" (OuterVolumeSpecName: "kube-api-access-n946x") pod "cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" (UID: "cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488"). InnerVolumeSpecName "kube-api-access-n946x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.807573 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-inventory" (OuterVolumeSpecName: "inventory") pod "cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" (UID: "cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.808015 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" (UID: "cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.879741 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n946x\" (UniqueName: \"kubernetes.io/projected/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-kube-api-access-n946x\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.879780 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:38 crc kubenswrapper[4823]: I1206 06:52:38.879792 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.155975 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d065c76-d379-4549-a3ad-a624878305c6" path="/var/lib/kubelet/pods/3d065c76-d379-4549-a3ad-a624878305c6/volumes" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.250763 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" event={"ID":"cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488","Type":"ContainerDied","Data":"a00fd7c6b3e8d63bb9ef54fc70a362575a307c539eeeb5eb7ebd4222b7c84563"} Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.250811 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00fd7c6b3e8d63bb9ef54fc70a362575a307c539eeeb5eb7ebd4222b7c84563" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.250823 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r428q" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.378516 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk"] Dec 06 06:52:39 crc kubenswrapper[4823]: E1206 06:52:39.379117 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="extract-content" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.379139 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="extract-content" Dec 06 06:52:39 crc kubenswrapper[4823]: E1206 06:52:39.379166 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="extract-utilities" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.379173 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="extract-utilities" Dec 06 06:52:39 crc kubenswrapper[4823]: E1206 06:52:39.379193 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="registry-server" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.379200 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="registry-server" Dec 06 06:52:39 crc kubenswrapper[4823]: E1206 06:52:39.379213 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.379220 4823 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.379447 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.379471 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d065c76-d379-4549-a3ad-a624878305c6" containerName="registry-server" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.380407 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.383608 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.383808 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.386296 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.390748 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.402274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk"] Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.493804 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.493864 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.493998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvnt\" (UniqueName: \"kubernetes.io/projected/05c11f0c-8eda-4110-b929-b1ef19924e5e-kube-api-access-qcvnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.494023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 
06:52:39.596291 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.596421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.596566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcvnt\" (UniqueName: \"kubernetes.io/projected/05c11f0c-8eda-4110-b929-b1ef19924e5e-kube-api-access-qcvnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.596607 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.601492 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.601997 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.602529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.614848 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcvnt\" (UniqueName: \"kubernetes.io/projected/05c11f0c-8eda-4110-b929-b1ef19924e5e-kube-api-access-qcvnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:39 crc kubenswrapper[4823]: I1206 06:52:39.709071 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:52:40 crc kubenswrapper[4823]: I1206 06:52:40.141922 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:52:40 crc kubenswrapper[4823]: E1206 06:52:40.142342 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:52:40 crc kubenswrapper[4823]: I1206 06:52:40.207162 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk"] Dec 06 06:52:40 crc kubenswrapper[4823]: I1206 06:52:40.261264 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" event={"ID":"05c11f0c-8eda-4110-b929-b1ef19924e5e","Type":"ContainerStarted","Data":"4f04d5be31a3dba78afb146c1dd48aba3123463a7abeb404acae8e38519ff5b6"} Dec 06 06:52:40 crc kubenswrapper[4823]: I1206 06:52:40.842707 4823 scope.go:117] "RemoveContainer" containerID="384b8c89c4ec153f147be679258ed92270d1089bbcb20e479a2687ddc87bacf8" Dec 06 06:52:42 crc kubenswrapper[4823]: I1206 06:52:42.288220 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" event={"ID":"05c11f0c-8eda-4110-b929-b1ef19924e5e","Type":"ContainerStarted","Data":"04a1f0f8201b93688657c18a9d4274cca4f291595df41089084af5fa56a64bea"} Dec 06 06:52:42 crc kubenswrapper[4823]: I1206 06:52:42.328170 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" podStartSLOduration=2.036760386 podStartE2EDuration="3.328141402s" podCreationTimestamp="2025-12-06 06:52:39 +0000 UTC" firstStartedPulling="2025-12-06 06:52:40.210539186 +0000 UTC m=+1661.496291146" lastFinishedPulling="2025-12-06 06:52:41.501920202 +0000 UTC m=+1662.787672162" observedRunningTime="2025-12-06 06:52:42.312859809 +0000 UTC m=+1663.598611789" watchObservedRunningTime="2025-12-06 06:52:42.328141402 +0000 UTC m=+1663.613893362" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.515486 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tbd49"] Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.518157 4823 util.go:30] "No sandbox for pod can be found. 
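
The recurring "back-off 5m0s restarting failed container=machine-config-daemon" entries (06:52:28, 06:52:40, 06:52:55) show the restart backoff already saturated. Assuming the stock kubelet policy of doubling a 10s base per crash, capped at 5m (which matches the "5m0s" printed here), the delay reaches the cap after roughly five failures:

    BASE, CAP = 10, 300   # seconds; assumed kubelet container-restart backoff defaults

    def backoff(restarts: int) -> int:
        return min(BASE * 2 ** restarts, CAP)

    print([backoff(n) for n in range(7)])   # [10, 20, 40, 80, 160, 300, 300]
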
Need to start a new one" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.525220 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbd49"] Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.705833 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmps\" (UniqueName: \"kubernetes.io/projected/999538c6-8208-40e9-b2ff-1dcd27cace79-kube-api-access-wvmps\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.706241 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-catalog-content\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.706430 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-utilities\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.807929 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-utilities\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.808036 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmps\" (UniqueName: \"kubernetes.io/projected/999538c6-8208-40e9-b2ff-1dcd27cace79-kube-api-access-wvmps\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.808127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-catalog-content\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.808493 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-utilities\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.808622 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-catalog-content\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.831112 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wvmps\" (UniqueName: \"kubernetes.io/projected/999538c6-8208-40e9-b2ff-1dcd27cace79-kube-api-access-wvmps\") pod \"certified-operators-tbd49\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:47 crc kubenswrapper[4823]: I1206 06:52:47.840627 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:48 crc kubenswrapper[4823]: I1206 06:52:48.350274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbd49"] Dec 06 06:52:48 crc kubenswrapper[4823]: I1206 06:52:48.366569 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerStarted","Data":"b3df395f3aea86e80778508bb89b65600d4a6ecf67799d66a3a7e042f1c00150"} Dec 06 06:52:49 crc kubenswrapper[4823]: I1206 06:52:49.380496 4823 generic.go:334] "Generic (PLEG): container finished" podID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerID="43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae" exitCode=0 Dec 06 06:52:49 crc kubenswrapper[4823]: I1206 06:52:49.380603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerDied","Data":"43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae"} Dec 06 06:52:50 crc kubenswrapper[4823]: I1206 06:52:50.395683 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerStarted","Data":"91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1"} Dec 06 06:52:51 crc kubenswrapper[4823]: I1206 06:52:51.407289 4823 generic.go:334] "Generic (PLEG): container finished" podID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerID="91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1" exitCode=0 Dec 06 06:52:51 crc kubenswrapper[4823]: I1206 06:52:51.407562 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerDied","Data":"91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1"} Dec 06 06:52:52 crc kubenswrapper[4823]: I1206 06:52:52.431566 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerStarted","Data":"76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3"} Dec 06 06:52:52 crc kubenswrapper[4823]: I1206 06:52:52.456480 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tbd49" podStartSLOduration=2.986872027 podStartE2EDuration="5.456449892s" podCreationTimestamp="2025-12-06 06:52:47 +0000 UTC" firstStartedPulling="2025-12-06 06:52:49.383141595 +0000 UTC m=+1670.668893555" lastFinishedPulling="2025-12-06 06:52:51.85271946 +0000 UTC m=+1673.138471420" observedRunningTime="2025-12-06 06:52:52.448164742 +0000 UTC m=+1673.733916702" watchObservedRunningTime="2025-12-06 06:52:52.456449892 +0000 UTC m=+1673.742201852" Dec 06 06:52:55 crc kubenswrapper[4823]: I1206 06:52:55.141045 4823 scope.go:117] "RemoveContainer" 
containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:52:55 crc kubenswrapper[4823]: E1206 06:52:55.141877 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:52:57 crc kubenswrapper[4823]: I1206 06:52:57.841996 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:57 crc kubenswrapper[4823]: I1206 06:52:57.842336 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:57 crc kubenswrapper[4823]: I1206 06:52:57.895372 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:58 crc kubenswrapper[4823]: I1206 06:52:58.536986 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:52:58 crc kubenswrapper[4823]: I1206 06:52:58.594976 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tbd49"] Dec 06 06:53:00 crc kubenswrapper[4823]: I1206 06:53:00.511407 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tbd49" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="registry-server" containerID="cri-o://76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3" gracePeriod=2 Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.007934 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.098981 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-catalog-content\") pod \"999538c6-8208-40e9-b2ff-1dcd27cace79\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.099150 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvmps\" (UniqueName: \"kubernetes.io/projected/999538c6-8208-40e9-b2ff-1dcd27cace79-kube-api-access-wvmps\") pod \"999538c6-8208-40e9-b2ff-1dcd27cace79\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.099324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-utilities\") pod \"999538c6-8208-40e9-b2ff-1dcd27cace79\" (UID: \"999538c6-8208-40e9-b2ff-1dcd27cace79\") " Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.100438 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-utilities" (OuterVolumeSpecName: "utilities") pod "999538c6-8208-40e9-b2ff-1dcd27cace79" (UID: "999538c6-8208-40e9-b2ff-1dcd27cace79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.114028 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999538c6-8208-40e9-b2ff-1dcd27cace79-kube-api-access-wvmps" (OuterVolumeSpecName: "kube-api-access-wvmps") pod "999538c6-8208-40e9-b2ff-1dcd27cace79" (UID: "999538c6-8208-40e9-b2ff-1dcd27cace79"). InnerVolumeSpecName "kube-api-access-wvmps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.160299 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "999538c6-8208-40e9-b2ff-1dcd27cace79" (UID: "999538c6-8208-40e9-b2ff-1dcd27cace79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.203213 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvmps\" (UniqueName: \"kubernetes.io/projected/999538c6-8208-40e9-b2ff-1dcd27cace79-kube-api-access-wvmps\") on node \"crc\" DevicePath \"\"" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.203250 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.203261 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999538c6-8208-40e9-b2ff-1dcd27cace79-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.525161 4823 generic.go:334] "Generic (PLEG): container finished" podID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerID="76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3" exitCode=0 Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.525211 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerDied","Data":"76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3"} Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.525251 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbd49" event={"ID":"999538c6-8208-40e9-b2ff-1dcd27cace79","Type":"ContainerDied","Data":"b3df395f3aea86e80778508bb89b65600d4a6ecf67799d66a3a7e042f1c00150"} Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.525274 4823 scope.go:117] "RemoveContainer" containerID="76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.525217 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tbd49" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.557562 4823 scope.go:117] "RemoveContainer" containerID="91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.572292 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tbd49"] Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.582451 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tbd49"] Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.601861 4823 scope.go:117] "RemoveContainer" containerID="43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.655105 4823 scope.go:117] "RemoveContainer" containerID="76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3" Dec 06 06:53:01 crc kubenswrapper[4823]: E1206 06:53:01.655589 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3\": container with ID starting with 76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3 not found: ID does not exist" containerID="76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.655640 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3"} err="failed to get container status \"76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3\": rpc error: code = NotFound desc = could not find container \"76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3\": container with ID starting with 76f2995cc8077b26f2101762b929d6df843e2185ce0ec2e36aadd799a4ecbfb3 not found: ID does not exist" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.655687 4823 scope.go:117] "RemoveContainer" containerID="91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1" Dec 06 06:53:01 crc kubenswrapper[4823]: E1206 06:53:01.656098 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1\": container with ID starting with 91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1 not found: ID does not exist" containerID="91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.656148 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1"} err="failed to get container status \"91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1\": rpc error: code = NotFound desc = could not find container \"91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1\": container with ID starting with 91b8116b03bbf4345648ad898940756c105f0c26b3eb87002bb96481fc533cf1 not found: ID does not exist" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.656179 4823 scope.go:117] "RemoveContainer" containerID="43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae" Dec 06 06:53:01 crc kubenswrapper[4823]: E1206 06:53:01.656631 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae\": container with ID starting with 43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae not found: ID does not exist" containerID="43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae" Dec 06 06:53:01 crc kubenswrapper[4823]: I1206 06:53:01.656671 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae"} err="failed to get container status \"43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae\": rpc error: code = NotFound desc = could not find container \"43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae\": container with ID starting with 43ab73af3c69b750618cbccec585a39ae367095a992d20ccc1b6bb27de08d1ae not found: ID does not exist" Dec 06 06:53:03 crc kubenswrapper[4823]: I1206 06:53:03.154641 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" path="/var/lib/kubelet/pods/999538c6-8208-40e9-b2ff-1dcd27cace79/volumes" Dec 06 06:53:07 crc kubenswrapper[4823]: I1206 06:53:07.141310 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:53:07 crc kubenswrapper[4823]: E1206 06:53:07.142167 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:53:19 crc kubenswrapper[4823]: I1206 06:53:19.149053 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:53:19 crc kubenswrapper[4823]: E1206 06:53:19.149981 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:53:32 crc kubenswrapper[4823]: I1206 06:53:32.141381 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:53:32 crc kubenswrapper[4823]: E1206 06:53:32.142333 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:53:43 crc kubenswrapper[4823]: I1206 06:53:43.140881 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:53:43 crc kubenswrapper[4823]: E1206 06:53:43.141747 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:53:55 crc kubenswrapper[4823]: I1206 06:53:55.143261 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:53:55 crc kubenswrapper[4823]: E1206 06:53:55.146835 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:54:09 crc kubenswrapper[4823]: I1206 06:54:09.152630 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:54:09 crc kubenswrapper[4823]: E1206 06:54:09.154736 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:54:23 crc kubenswrapper[4823]: I1206 06:54:23.141185 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:54:23 crc kubenswrapper[4823]: E1206 06:54:23.141926 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:54:32 crc kubenswrapper[4823]: I1206 06:54:32.045341 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-13a0-account-create-update-rnzvp"] Dec 06 06:54:32 crc kubenswrapper[4823]: I1206 06:54:32.054828 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-13a0-account-create-update-rnzvp"] Dec 06 06:54:33 crc kubenswrapper[4823]: I1206 06:54:33.153729 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2687bc-8979-48a0-8a02-8e6cd5f62b0b" path="/var/lib/kubelet/pods/6b2687bc-8979-48a0-8a02-8e6cd5f62b0b/volumes" Dec 06 06:54:37 crc kubenswrapper[4823]: I1206 06:54:37.141686 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:54:37 crc kubenswrapper[4823]: E1206 06:54:37.142246 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:54:40 crc kubenswrapper[4823]: I1206 06:54:40.989861 4823 scope.go:117] "RemoveContainer" containerID="b7456e8e24167f3bdbcf9e233f23fc895fdd5494870dfc7c190ee24d72e289df" Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.036976 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-cf22j"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.050129 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xlp49"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.062969 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-3d74-account-create-update-kz9kx"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.072820 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xlp49"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.083137 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-3d74-account-create-update-kz9kx"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.092802 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-cf22j"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.102971 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-f6wtx"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.124793 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3a9a-account-create-update-ckvgt"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.134647 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-f6wtx"] Dec 06 06:54:44 crc kubenswrapper[4823]: I1206 06:54:44.144977 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3a9a-account-create-update-ckvgt"] Dec 06 06:54:45 crc kubenswrapper[4823]: I1206 06:54:45.154534 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e414a4-1aba-4797-a384-ed802cb06e0c" path="/var/lib/kubelet/pods/09e414a4-1aba-4797-a384-ed802cb06e0c/volumes" Dec 06 06:54:45 crc kubenswrapper[4823]: I1206 06:54:45.155369 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1333c4a8-9d88-4ba6-b00c-22b790673422" path="/var/lib/kubelet/pods/1333c4a8-9d88-4ba6-b00c-22b790673422/volumes" Dec 06 06:54:45 crc kubenswrapper[4823]: I1206 06:54:45.156182 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da3e65f-fbe0-4373-a601-8408a5f4f033" path="/var/lib/kubelet/pods/9da3e65f-fbe0-4373-a601-8408a5f4f033/volumes" Dec 06 06:54:45 crc kubenswrapper[4823]: I1206 06:54:45.156893 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f3c4a2-8111-4fcc-a387-e91f191804e8" path="/var/lib/kubelet/pods/b2f3c4a2-8111-4fcc-a387-e91f191804e8/volumes" Dec 06 06:54:45 crc kubenswrapper[4823]: I1206 06:54:45.158339 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7b8301-6cf7-4fbf-8968-e26088f8b144" path="/var/lib/kubelet/pods/ef7b8301-6cf7-4fbf-8968-e26088f8b144/volumes" Dec 06 06:54:52 crc kubenswrapper[4823]: I1206 06:54:52.141311 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:54:52 crc kubenswrapper[4823]: E1206 06:54:52.142203 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:55:03 crc kubenswrapper[4823]: I1206 06:55:03.141032 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:55:03 crc kubenswrapper[4823]: E1206 06:55:03.141854 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:55:15 crc kubenswrapper[4823]: I1206 06:55:15.141714 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:55:15 crc kubenswrapper[4823]: E1206 06:55:15.142719 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:55:17 crc kubenswrapper[4823]: I1206 06:55:17.043916 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6x8x9"] Dec 06 06:55:17 crc kubenswrapper[4823]: I1206 06:55:17.057081 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6x8x9"] Dec 06 06:55:17 crc kubenswrapper[4823]: I1206 06:55:17.158128 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222d27f2-d83e-4213-b3f4-83dd6c5d14e7" path="/var/lib/kubelet/pods/222d27f2-d83e-4213-b3f4-83dd6c5d14e7/volumes" Dec 06 06:55:18 crc kubenswrapper[4823]: I1206 06:55:18.033676 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hq4mm"] Dec 06 06:55:18 crc kubenswrapper[4823]: I1206 06:55:18.046567 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hq4mm"] Dec 06 06:55:19 crc kubenswrapper[4823]: I1206 06:55:19.042396 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-205f-account-create-update-fk4rx"] Dec 06 06:55:19 crc kubenswrapper[4823]: I1206 06:55:19.055180 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7eb5-account-create-update-pdxbm"] Dec 06 06:55:19 crc kubenswrapper[4823]: I1206 06:55:19.067216 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-205f-account-create-update-fk4rx"] Dec 06 06:55:19 crc kubenswrapper[4823]: I1206 06:55:19.079881 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7eb5-account-create-update-pdxbm"] Dec 06 06:55:19 crc kubenswrapper[4823]: I1206 06:55:19.156633 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ad2bfb-9313-4afb-84aa-b42f108da314" path="/var/lib/kubelet/pods/94ad2bfb-9313-4afb-84aa-b42f108da314/volumes" Dec 06 06:55:19 crc 
kubenswrapper[4823]: I1206 06:55:19.157770 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a360db-ef02-428f-85fc-2470c362c39e" path="/var/lib/kubelet/pods/e9a360db-ef02-428f-85fc-2470c362c39e/volumes" Dec 06 06:55:19 crc kubenswrapper[4823]: I1206 06:55:19.158790 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca812df-30ed-47ad-9a3f-5fbb17d7032d" path="/var/lib/kubelet/pods/fca812df-30ed-47ad-9a3f-5fbb17d7032d/volumes" Dec 06 06:55:25 crc kubenswrapper[4823]: I1206 06:55:25.044974 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0edd-account-create-update-n4hqb"] Dec 06 06:55:25 crc kubenswrapper[4823]: I1206 06:55:25.061216 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-00ee-account-create-update-x65sw"] Dec 06 06:55:25 crc kubenswrapper[4823]: I1206 06:55:25.082297 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-00ee-account-create-update-x65sw"] Dec 06 06:55:25 crc kubenswrapper[4823]: I1206 06:55:25.094482 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0edd-account-create-update-n4hqb"] Dec 06 06:55:25 crc kubenswrapper[4823]: I1206 06:55:25.154608 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2325c2-986c-4b21-b734-6e4a6b0c8199" path="/var/lib/kubelet/pods/3f2325c2-986c-4b21-b734-6e4a6b0c8199/volumes" Dec 06 06:55:25 crc kubenswrapper[4823]: I1206 06:55:25.155493 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d32988-ae9c-4b85-ac33-e847bebb88c9" path="/var/lib/kubelet/pods/75d32988-ae9c-4b85-ac33-e847bebb88c9/volumes" Dec 06 06:55:29 crc kubenswrapper[4823]: I1206 06:55:29.153628 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:55:29 crc kubenswrapper[4823]: E1206 06:55:29.154523 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:55:30 crc kubenswrapper[4823]: I1206 06:55:30.032401 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-clm65"] Dec 06 06:55:30 crc kubenswrapper[4823]: I1206 06:55:30.043419 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-clm65"] Dec 06 06:55:31 crc kubenswrapper[4823]: I1206 06:55:31.031772 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dchbm"] Dec 06 06:55:31 crc kubenswrapper[4823]: I1206 06:55:31.044281 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dchbm"] Dec 06 06:55:31 crc kubenswrapper[4823]: I1206 06:55:31.155652 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06696730-e439-443b-a5f9-55ea9b90107f" path="/var/lib/kubelet/pods/06696730-e439-443b-a5f9-55ea9b90107f/volumes" Dec 06 06:55:31 crc kubenswrapper[4823]: I1206 06:55:31.158002 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98060624-4d67-42df-ba2a-5f70c05200c1" path="/var/lib/kubelet/pods/98060624-4d67-42df-ba2a-5f70c05200c1/volumes" Dec 06 06:55:38 crc kubenswrapper[4823]: I1206 06:55:38.045928 
4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bll7z"] Dec 06 06:55:38 crc kubenswrapper[4823]: I1206 06:55:38.055021 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bll7z"] Dec 06 06:55:39 crc kubenswrapper[4823]: I1206 06:55:39.152343 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7a2652-1281-4158-8bac-c547abce2fed" path="/var/lib/kubelet/pods/ab7a2652-1281-4158-8bac-c547abce2fed/volumes" Dec 06 06:55:40 crc kubenswrapper[4823]: I1206 06:55:40.141368 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:55:40 crc kubenswrapper[4823]: E1206 06:55:40.141999 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.065941 4823 scope.go:117] "RemoveContainer" containerID="498fdfef078b7efaf30fff9e77af1b0a1f26f1a143440a3210e34fd12e32fd89" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.101878 4823 scope.go:117] "RemoveContainer" containerID="f57a11193ba25165963caae7db61751322a38eee89586673547181edfc2f461e" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.155752 4823 scope.go:117] "RemoveContainer" containerID="51c5325568e84084ee1ffbade411a2e00087f50b058a992d46b0f2b2ce3d45af" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.219808 4823 scope.go:117] "RemoveContainer" containerID="8af0c28e888fa4d5793ae5d4e506f85d3334a9e47e2f248128efd535ab7bace7" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.264364 4823 scope.go:117] "RemoveContainer" containerID="b2b15653f86c5efa5017573c825e3bbff3e0b2078dd04d58c0ff800a5de4171d" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.296265 4823 scope.go:117] "RemoveContainer" containerID="e931905d8f8211e215f09f286239ac6baf8f48ca0741160a528ae6a669d737e3" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.342067 4823 scope.go:117] "RemoveContainer" containerID="dd032ea1b34c20c26087e9ca7492ca3cb722ba5c9b47af1dafdf20ece180f4f6" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.370263 4823 scope.go:117] "RemoveContainer" containerID="1a122802b70e127930b1a12cb9e773513a90c35d2af3abb1e23937d03777ddd5" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.429676 4823 scope.go:117] "RemoveContainer" containerID="78f023ea41c0203f1edc73c59c63a12b5ad719ce0c49b7f8dfcadb9f7d82bc23" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.456845 4823 scope.go:117] "RemoveContainer" containerID="952e122e5acfa4a9c40450f07a21851624a9f337c5fadeee05cfbdd6e8b2679e" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.478613 4823 scope.go:117] "RemoveContainer" containerID="f97d97e5393744c22f11951867f76336de382347ec9e67ab7fff7f1a68190418" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.556268 4823 scope.go:117] "RemoveContainer" containerID="49c00b0e0b09e0c060278189c9ac44edfd229cb509c2153cf29b9a32e0fb40c0" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.578623 4823 scope.go:117] "RemoveContainer" containerID="2ad2eda1dcaa3475ce2af7abe256e8608e7c854d8151a7bdc22c8064ffb15ea2" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 
06:55:41.605227 4823 scope.go:117] "RemoveContainer" containerID="9ef8800e027d1803b9587c5b9e9e16dd2ca3ebdb5568c45d946d1b887723892a" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.625299 4823 scope.go:117] "RemoveContainer" containerID="116de26f1fcdbdb30a27bac3e27e6a2cddc81a132a2e1fdc19500fd12a751e43" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.664621 4823 scope.go:117] "RemoveContainer" containerID="1a7a6ef9c112f5f304cca722950a035aa057a7d9023c2567facbd46c1f07ec76" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.694148 4823 scope.go:117] "RemoveContainer" containerID="47f5076437a15a6d4897a3eb70180aab41c23429617ddc77a4bfe855ec3009db" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.716832 4823 scope.go:117] "RemoveContainer" containerID="6307f4bcacfe8cdbf727c30f50d21ea17b715cafdf4ce2ceb913a6affe48dacd" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.739808 4823 scope.go:117] "RemoveContainer" containerID="3c93202077f81c3eeeed3933b988983828e57bf8646ec2075f872ae4a90dbe46" Dec 06 06:55:41 crc kubenswrapper[4823]: I1206 06:55:41.763788 4823 scope.go:117] "RemoveContainer" containerID="30524077567ae87bcaec47e7b4175d67199aca26c76248b69ef70604bb32a422" Dec 06 06:55:53 crc kubenswrapper[4823]: I1206 06:55:53.142057 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:55:53 crc kubenswrapper[4823]: E1206 06:55:53.142926 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:56:05 crc kubenswrapper[4823]: I1206 06:56:05.582892 4823 generic.go:334] "Generic (PLEG): container finished" podID="05c11f0c-8eda-4110-b929-b1ef19924e5e" containerID="04a1f0f8201b93688657c18a9d4274cca4f291595df41089084af5fa56a64bea" exitCode=0 Dec 06 06:56:05 crc kubenswrapper[4823]: I1206 06:56:05.583006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" event={"ID":"05c11f0c-8eda-4110-b929-b1ef19924e5e","Type":"ContainerDied","Data":"04a1f0f8201b93688657c18a9d4274cca4f291595df41089084af5fa56a64bea"} Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.075569 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.113789 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-ssh-key\") pod \"05c11f0c-8eda-4110-b929-b1ef19924e5e\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.113872 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcvnt\" (UniqueName: \"kubernetes.io/projected/05c11f0c-8eda-4110-b929-b1ef19924e5e-kube-api-access-qcvnt\") pod \"05c11f0c-8eda-4110-b929-b1ef19924e5e\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.113950 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-bootstrap-combined-ca-bundle\") pod \"05c11f0c-8eda-4110-b929-b1ef19924e5e\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.114033 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-inventory\") pod \"05c11f0c-8eda-4110-b929-b1ef19924e5e\" (UID: \"05c11f0c-8eda-4110-b929-b1ef19924e5e\") " Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.120043 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c11f0c-8eda-4110-b929-b1ef19924e5e-kube-api-access-qcvnt" (OuterVolumeSpecName: "kube-api-access-qcvnt") pod "05c11f0c-8eda-4110-b929-b1ef19924e5e" (UID: "05c11f0c-8eda-4110-b929-b1ef19924e5e"). InnerVolumeSpecName "kube-api-access-qcvnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.121463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "05c11f0c-8eda-4110-b929-b1ef19924e5e" (UID: "05c11f0c-8eda-4110-b929-b1ef19924e5e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.144856 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:56:07 crc kubenswrapper[4823]: E1206 06:56:07.145550 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.150641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-inventory" (OuterVolumeSpecName: "inventory") pod "05c11f0c-8eda-4110-b929-b1ef19924e5e" (UID: "05c11f0c-8eda-4110-b929-b1ef19924e5e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.152146 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05c11f0c-8eda-4110-b929-b1ef19924e5e" (UID: "05c11f0c-8eda-4110-b929-b1ef19924e5e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.215931 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.216106 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcvnt\" (UniqueName: \"kubernetes.io/projected/05c11f0c-8eda-4110-b929-b1ef19924e5e-kube-api-access-qcvnt\") on node \"crc\" DevicePath \"\"" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.216247 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.216316 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c11f0c-8eda-4110-b929-b1ef19924e5e-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.605976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" event={"ID":"05c11f0c-8eda-4110-b929-b1ef19924e5e","Type":"ContainerDied","Data":"4f04d5be31a3dba78afb146c1dd48aba3123463a7abeb404acae8e38519ff5b6"} Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.606060 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.606065 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f04d5be31a3dba78afb146c1dd48aba3123463a7abeb404acae8e38519ff5b6" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.706438 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5"] Dec 06 06:56:07 crc kubenswrapper[4823]: E1206 06:56:07.707014 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="extract-utilities" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.707040 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="extract-utilities" Dec 06 06:56:07 crc kubenswrapper[4823]: E1206 06:56:07.707061 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="extract-content" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.707071 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="extract-content" Dec 06 06:56:07 crc kubenswrapper[4823]: E1206 06:56:07.707091 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c11f0c-8eda-4110-b929-b1ef19924e5e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.707102 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c11f0c-8eda-4110-b929-b1ef19924e5e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 06:56:07 crc kubenswrapper[4823]: E1206 06:56:07.707153 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="registry-server" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.707163 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="registry-server" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.707406 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c11f0c-8eda-4110-b929-b1ef19924e5e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.707440 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="999538c6-8208-40e9-b2ff-1dcd27cace79" containerName="registry-server" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.708429 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.711512 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.712204 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.712211 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.719154 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5"] Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.719583 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.735964 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.736052 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgxft\" (UniqueName: \"kubernetes.io/projected/7567b412-7ee9-413a-999b-ac4525e10bfa-kube-api-access-fgxft\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.736102 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.837563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.837785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.837817 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgxft\" (UniqueName: \"kubernetes.io/projected/7567b412-7ee9-413a-999b-ac4525e10bfa-kube-api-access-fgxft\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.841814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.841929 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:07 crc kubenswrapper[4823]: I1206 06:56:07.858583 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgxft\" (UniqueName: \"kubernetes.io/projected/7567b412-7ee9-413a-999b-ac4525e10bfa-kube-api-access-fgxft\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:08 crc kubenswrapper[4823]: I1206 06:56:08.087183 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:56:08 crc kubenswrapper[4823]: I1206 06:56:08.733192 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5"] Dec 06 06:56:09 crc kubenswrapper[4823]: I1206 06:56:09.625514 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" event={"ID":"7567b412-7ee9-413a-999b-ac4525e10bfa","Type":"ContainerStarted","Data":"519cbfdb2d47122c3b8530d41d968eeaa8c2e1746eca1523772e0d3396fb4710"} Dec 06 06:56:09 crc kubenswrapper[4823]: I1206 06:56:09.625902 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" event={"ID":"7567b412-7ee9-413a-999b-ac4525e10bfa","Type":"ContainerStarted","Data":"903cf514ae686604ed6f023220e5e34f35c3d086abccd2ec20083287e8f4df67"} Dec 06 06:56:09 crc kubenswrapper[4823]: I1206 06:56:09.649096 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" podStartSLOduration=2.083803746 podStartE2EDuration="2.649064753s" podCreationTimestamp="2025-12-06 06:56:07 +0000 UTC" firstStartedPulling="2025-12-06 06:56:08.742209598 +0000 UTC m=+1870.027961548" lastFinishedPulling="2025-12-06 06:56:09.307470595 +0000 UTC m=+1870.593222555" observedRunningTime="2025-12-06 06:56:09.641898515 +0000 UTC m=+1870.927650475" watchObservedRunningTime="2025-12-06 06:56:09.649064753 +0000 UTC m=+1870.934816713" Dec 06 06:56:18 crc kubenswrapper[4823]: I1206 06:56:18.140658 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:56:18 crc kubenswrapper[4823]: E1206 06:56:18.141477 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:56:32 crc kubenswrapper[4823]: I1206 06:56:32.141683 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:56:32 crc kubenswrapper[4823]: E1206 06:56:32.142481 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 06:56:47 crc kubenswrapper[4823]: I1206 06:56:47.142294 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 06:56:48 crc kubenswrapper[4823]: I1206 06:56:48.009293 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"e5d2ae4a22e402696798d2d26ede0bf777dfb9593268a6c6415aab7996e8a81d"} Dec 06 06:57:15 crc kubenswrapper[4823]: I1206 06:57:15.055274 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vbx6z"] Dec 06 06:57:15 crc kubenswrapper[4823]: I1206 06:57:15.065941 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vbx6z"] Dec 06 06:57:15 crc kubenswrapper[4823]: I1206 06:57:15.159485 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bb321f-d664-4c3d-a312-6a57323199c9" path="/var/lib/kubelet/pods/d1bb321f-d664-4c3d-a312-6a57323199c9/volumes" Dec 06 06:57:29 crc kubenswrapper[4823]: I1206 06:57:29.047307 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-26bwc"] Dec 06 06:57:29 crc kubenswrapper[4823]: I1206 06:57:29.057491 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-h7n5l"] Dec 06 06:57:29 crc kubenswrapper[4823]: I1206 06:57:29.068085 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-26bwc"] Dec 06 06:57:29 crc kubenswrapper[4823]: I1206 06:57:29.078046 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-h7n5l"] Dec 06 06:57:29 crc kubenswrapper[4823]: I1206 06:57:29.156354 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2955103b-2cae-4fe0-8ffe-bbca608cad77" path="/var/lib/kubelet/pods/2955103b-2cae-4fe0-8ffe-bbca608cad77/volumes" Dec 06 06:57:29 crc kubenswrapper[4823]: I1206 06:57:29.157205 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaeee530-df36-4fc7-96d5-b93755e8c4fe" path="/var/lib/kubelet/pods/aaeee530-df36-4fc7-96d5-b93755e8c4fe/volumes" Dec 06 06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.039099 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x7fxv"] Dec 06 06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.051351 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x7fxv"] Dec 06 
06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.098744 4823 scope.go:117] "RemoveContainer" containerID="5fc11ea57edb209c9459412eb4cdb4071f99063c9e79888b9ea3f358bc92217f" Dec 06 06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.139503 4823 scope.go:117] "RemoveContainer" containerID="8a2f5be1440fbacc044e3c6bce78b35114f3e3e6962e6cecb11f639db268e959" Dec 06 06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.206133 4823 scope.go:117] "RemoveContainer" containerID="7b4d311230b0ae04ed859ba4ec9ebfe03e3e8abd52f98c030f298e10c433e534" Dec 06 06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.281870 4823 scope.go:117] "RemoveContainer" containerID="91e469cd349a94e6cc10021d6b5ac2cb85c4c6ad3898b9805bc2ab605dc98406" Dec 06 06:57:42 crc kubenswrapper[4823]: I1206 06:57:42.304627 4823 scope.go:117] "RemoveContainer" containerID="5ea99d5abc9e427ab63f79f7f81597a8a0063395441f32474ab0dceb630285d5" Dec 06 06:57:43 crc kubenswrapper[4823]: I1206 06:57:43.154628 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d04e917-34c8-4df1-bc89-69ca7b7753ac" path="/var/lib/kubelet/pods/3d04e917-34c8-4df1-bc89-69ca7b7753ac/volumes" Dec 06 06:57:48 crc kubenswrapper[4823]: I1206 06:57:48.029166 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kls2x"] Dec 06 06:57:48 crc kubenswrapper[4823]: I1206 06:57:48.039604 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kls2x"] Dec 06 06:57:49 crc kubenswrapper[4823]: I1206 06:57:49.152844 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157d2d95-42a3-4f80-8c1d-b8c27bee49be" path="/var/lib/kubelet/pods/157d2d95-42a3-4f80-8c1d-b8c27bee49be/volumes" Dec 06 06:57:52 crc kubenswrapper[4823]: I1206 06:57:52.660966 4823 generic.go:334] "Generic (PLEG): container finished" podID="7567b412-7ee9-413a-999b-ac4525e10bfa" containerID="519cbfdb2d47122c3b8530d41d968eeaa8c2e1746eca1523772e0d3396fb4710" exitCode=0 Dec 06 06:57:52 crc kubenswrapper[4823]: I1206 06:57:52.661031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" event={"ID":"7567b412-7ee9-413a-999b-ac4525e10bfa","Type":"ContainerDied","Data":"519cbfdb2d47122c3b8530d41d968eeaa8c2e1746eca1523772e0d3396fb4710"} Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.086601 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.225316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-inventory\") pod \"7567b412-7ee9-413a-999b-ac4525e10bfa\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.225518 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgxft\" (UniqueName: \"kubernetes.io/projected/7567b412-7ee9-413a-999b-ac4525e10bfa-kube-api-access-fgxft\") pod \"7567b412-7ee9-413a-999b-ac4525e10bfa\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.225625 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-ssh-key\") pod \"7567b412-7ee9-413a-999b-ac4525e10bfa\" (UID: \"7567b412-7ee9-413a-999b-ac4525e10bfa\") " Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.232914 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7567b412-7ee9-413a-999b-ac4525e10bfa-kube-api-access-fgxft" (OuterVolumeSpecName: "kube-api-access-fgxft") pod "7567b412-7ee9-413a-999b-ac4525e10bfa" (UID: "7567b412-7ee9-413a-999b-ac4525e10bfa"). InnerVolumeSpecName "kube-api-access-fgxft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.258351 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-inventory" (OuterVolumeSpecName: "inventory") pod "7567b412-7ee9-413a-999b-ac4525e10bfa" (UID: "7567b412-7ee9-413a-999b-ac4525e10bfa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.274012 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7567b412-7ee9-413a-999b-ac4525e10bfa" (UID: "7567b412-7ee9-413a-999b-ac4525e10bfa"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.327527 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.327858 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgxft\" (UniqueName: \"kubernetes.io/projected/7567b412-7ee9-413a-999b-ac4525e10bfa-kube-api-access-fgxft\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.327873 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7567b412-7ee9-413a-999b-ac4525e10bfa-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.698488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" event={"ID":"7567b412-7ee9-413a-999b-ac4525e10bfa","Type":"ContainerDied","Data":"903cf514ae686604ed6f023220e5e34f35c3d086abccd2ec20083287e8f4df67"} Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.698846 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="903cf514ae686604ed6f023220e5e34f35c3d086abccd2ec20083287e8f4df67" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.699159 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.781174 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q"] Dec 06 06:57:54 crc kubenswrapper[4823]: E1206 06:57:54.781644 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7567b412-7ee9-413a-999b-ac4525e10bfa" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.781681 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7567b412-7ee9-413a-999b-ac4525e10bfa" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.781946 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7567b412-7ee9-413a-999b-ac4525e10bfa" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.782644 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.786743 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.786966 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.787239 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.788069 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.811161 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q"] Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.941563 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.941679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:54 crc kubenswrapper[4823]: I1206 06:57:54.941715 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7llt\" (UniqueName: \"kubernetes.io/projected/f09400da-5834-4f03-8212-4c4a27edbe13-kube-api-access-d7llt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.043347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.043393 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7llt\" (UniqueName: \"kubernetes.io/projected/f09400da-5834-4f03-8212-4c4a27edbe13-kube-api-access-d7llt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.043563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.048459 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.057428 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.071457 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7llt\" (UniqueName: \"kubernetes.io/projected/f09400da-5834-4f03-8212-4c4a27edbe13-kube-api-access-d7llt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6876q\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.117248 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.772037 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q"] Dec 06 06:57:55 crc kubenswrapper[4823]: I1206 06:57:55.775814 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:57:56 crc kubenswrapper[4823]: I1206 06:57:56.719340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" event={"ID":"f09400da-5834-4f03-8212-4c4a27edbe13","Type":"ContainerStarted","Data":"e66471019977145788f269f13f1212e7865e4e8e4f2700de988c9ee136a68ba8"} Dec 06 06:57:56 crc kubenswrapper[4823]: I1206 06:57:56.719853 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" event={"ID":"f09400da-5834-4f03-8212-4c4a27edbe13","Type":"ContainerStarted","Data":"9fbfc765881eaf2c4365786bbe23ef52cf36d238841b182f84053688ed202715"} Dec 06 06:57:57 crc kubenswrapper[4823]: I1206 06:57:57.747054 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" podStartSLOduration=3.3044178730000002 podStartE2EDuration="3.747034698s" podCreationTimestamp="2025-12-06 06:57:54 +0000 UTC" firstStartedPulling="2025-12-06 06:57:55.775539856 +0000 UTC m=+1977.061291816" lastFinishedPulling="2025-12-06 06:57:56.218156681 +0000 UTC m=+1977.503908641" observedRunningTime="2025-12-06 06:57:57.742105265 +0000 UTC m=+1979.027857225" watchObservedRunningTime="2025-12-06 06:57:57.747034698 +0000 UTC m=+1979.032786658" Dec 06 06:58:16 crc kubenswrapper[4823]: I1206 06:58:16.045970 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-sync-dwpkn"] Dec 06 06:58:16 crc kubenswrapper[4823]: I1206 06:58:16.055974 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9fd7r"] Dec 06 06:58:16 crc kubenswrapper[4823]: I1206 06:58:16.064779 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9fd7r"] Dec 06 06:58:16 crc kubenswrapper[4823]: I1206 06:58:16.073990 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dwpkn"] Dec 06 06:58:17 crc kubenswrapper[4823]: I1206 06:58:17.154601 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b984559e-efdf-4d21-917f-420506f550da" path="/var/lib/kubelet/pods/b984559e-efdf-4d21-917f-420506f550da/volumes" Dec 06 06:58:17 crc kubenswrapper[4823]: I1206 06:58:17.155627 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5301842-d5df-4df6-8699-56f86789df64" path="/var/lib/kubelet/pods/f5301842-d5df-4df6-8699-56f86789df64/volumes" Dec 06 06:58:23 crc kubenswrapper[4823]: I1206 06:58:23.047039 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e9c7-account-create-update-mmhk9"] Dec 06 06:58:23 crc kubenswrapper[4823]: I1206 06:58:23.057079 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-abd5-account-create-update-4jjrw"] Dec 06 06:58:23 crc kubenswrapper[4823]: I1206 06:58:23.070324 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e9c7-account-create-update-mmhk9"] Dec 06 06:58:23 crc kubenswrapper[4823]: I1206 06:58:23.081047 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-abd5-account-create-update-4jjrw"] Dec 06 06:58:23 crc kubenswrapper[4823]: I1206 06:58:23.154609 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199828c4-e1bd-42a8-b35c-ba26f4c980b8" path="/var/lib/kubelet/pods/199828c4-e1bd-42a8-b35c-ba26f4c980b8/volumes" Dec 06 06:58:23 crc kubenswrapper[4823]: I1206 06:58:23.155440 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b29187-c6a2-4b88-9215-759fe3cb8dad" path="/var/lib/kubelet/pods/d3b29187-c6a2-4b88-9215-759fe3cb8dad/volumes" Dec 06 06:58:24 crc kubenswrapper[4823]: I1206 06:58:24.028265 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qdzrg"] Dec 06 06:58:24 crc kubenswrapper[4823]: I1206 06:58:24.039434 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-f788j"] Dec 06 06:58:24 crc kubenswrapper[4823]: I1206 06:58:24.048455 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-f788j"] Dec 06 06:58:24 crc kubenswrapper[4823]: I1206 06:58:24.057129 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qdzrg"] Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.038721 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2pbnm"] Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.047878 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5f48-account-create-update-9428s"] Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.057047 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2pbnm"] Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.065869 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5f48-account-create-update-9428s"] Dec 06 
06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.152039 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0439c056-347b-4a05-95aa-e85289754ecc" path="/var/lib/kubelet/pods/0439c056-347b-4a05-95aa-e85289754ecc/volumes" Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.153025 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c6e0b9-0c53-43ea-a471-9076b51f877b" path="/var/lib/kubelet/pods/20c6e0b9-0c53-43ea-a471-9076b51f877b/volumes" Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.153937 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7930a7dd-359c-4d6d-9a66-de8eaa5f6f60" path="/var/lib/kubelet/pods/7930a7dd-359c-4d6d-9a66-de8eaa5f6f60/volumes" Dec 06 06:58:25 crc kubenswrapper[4823]: I1206 06:58:25.154732 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb8dbce-6f68-4245-971e-e9087ed93cf1" path="/var/lib/kubelet/pods/afb8dbce-6f68-4245-971e-e9087ed93cf1/volumes" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.494298 4823 scope.go:117] "RemoveContainer" containerID="b46f30ace5e01e63333fc468cf37f7f89802f526709ad97d0690873443e39cb4" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.523775 4823 scope.go:117] "RemoveContainer" containerID="805375e951eb3e8329cc48a973f91fcc6c76c9f2fd86979d507b8407f0b574f9" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.579083 4823 scope.go:117] "RemoveContainer" containerID="71f6e30d66b6d046a259a4606ba2845e1ec385954ace5f6911d474e41efa8b25" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.622472 4823 scope.go:117] "RemoveContainer" containerID="b3a1e8bde4132a6cbd5278eb2ca708e0dc12cd7c9e2443445a95b2e7f0420b28" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.684040 4823 scope.go:117] "RemoveContainer" containerID="a1941d82aa7c25d9ec68304e1c06c5806c4fb64465e8c386f8e04559e1a97de0" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.746827 4823 scope.go:117] "RemoveContainer" containerID="45c2548ae54254ed1b411a8df02203fa9d6a360e80300e3ebd0ebb4d1550db82" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.840450 4823 scope.go:117] "RemoveContainer" containerID="ccbc6492c4baaefac97b9f89624d19954c90cc49ec1420e482bc5e684f82b122" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.885154 4823 scope.go:117] "RemoveContainer" containerID="72f1a5453be0ee81c3a3de2c9eac46e9620926a078629fc9420781a5d9a30395" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.914677 4823 scope.go:117] "RemoveContainer" containerID="6fe2554afef990e2feeeb72cb985a1e77b66748179faa0dd0685ef6860aa0271" Dec 06 06:58:42 crc kubenswrapper[4823]: I1206 06:58:42.941820 4823 scope.go:117] "RemoveContainer" containerID="402d507b0a3393646cdc9b117e0bfb305e3b278e50d00ef1db371f76043ae9e7" Dec 06 06:59:06 crc kubenswrapper[4823]: I1206 06:59:06.052104 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:59:06 crc kubenswrapper[4823]: I1206 06:59:06.052784 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:59:17 crc 
kubenswrapper[4823]: I1206 06:59:17.495291 4823 generic.go:334] "Generic (PLEG): container finished" podID="f09400da-5834-4f03-8212-4c4a27edbe13" containerID="e66471019977145788f269f13f1212e7865e4e8e4f2700de988c9ee136a68ba8" exitCode=0 Dec 06 06:59:17 crc kubenswrapper[4823]: I1206 06:59:17.495359 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" event={"ID":"f09400da-5834-4f03-8212-4c4a27edbe13","Type":"ContainerDied","Data":"e66471019977145788f269f13f1212e7865e4e8e4f2700de988c9ee136a68ba8"} Dec 06 06:59:18 crc kubenswrapper[4823]: I1206 06:59:18.929949 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.066298 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7llt\" (UniqueName: \"kubernetes.io/projected/f09400da-5834-4f03-8212-4c4a27edbe13-kube-api-access-d7llt\") pod \"f09400da-5834-4f03-8212-4c4a27edbe13\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.066355 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-inventory\") pod \"f09400da-5834-4f03-8212-4c4a27edbe13\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.066386 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key\") pod \"f09400da-5834-4f03-8212-4c4a27edbe13\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.072021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09400da-5834-4f03-8212-4c4a27edbe13-kube-api-access-d7llt" (OuterVolumeSpecName: "kube-api-access-d7llt") pod "f09400da-5834-4f03-8212-4c4a27edbe13" (UID: "f09400da-5834-4f03-8212-4c4a27edbe13"). InnerVolumeSpecName "kube-api-access-d7llt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:59:19 crc kubenswrapper[4823]: E1206 06:59:19.091726 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key podName:f09400da-5834-4f03-8212-4c4a27edbe13 nodeName:}" failed. No retries permitted until 2025-12-06 06:59:19.591688939 +0000 UTC m=+2060.877440899 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key") pod "f09400da-5834-4f03-8212-4c4a27edbe13" (UID: "f09400da-5834-4f03-8212-4c4a27edbe13") : error deleting /var/lib/kubelet/pods/f09400da-5834-4f03-8212-4c4a27edbe13/volume-subpaths: remove /var/lib/kubelet/pods/f09400da-5834-4f03-8212-4c4a27edbe13/volume-subpaths: no such file or directory Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.095042 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-inventory" (OuterVolumeSpecName: "inventory") pod "f09400da-5834-4f03-8212-4c4a27edbe13" (UID: "f09400da-5834-4f03-8212-4c4a27edbe13"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.168211 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7llt\" (UniqueName: \"kubernetes.io/projected/f09400da-5834-4f03-8212-4c4a27edbe13-kube-api-access-d7llt\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.168239 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.541892 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" event={"ID":"f09400da-5834-4f03-8212-4c4a27edbe13","Type":"ContainerDied","Data":"9fbfc765881eaf2c4365786bbe23ef52cf36d238841b182f84053688ed202715"} Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.541950 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fbfc765881eaf2c4365786bbe23ef52cf36d238841b182f84053688ed202715" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.542029 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6876q" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.602295 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch"] Dec 06 06:59:19 crc kubenswrapper[4823]: E1206 06:59:19.602876 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09400da-5834-4f03-8212-4c4a27edbe13" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.602897 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09400da-5834-4f03-8212-4c4a27edbe13" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.603147 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09400da-5834-4f03-8212-4c4a27edbe13" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.603917 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.615151 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch"] Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.678170 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key\") pod \"f09400da-5834-4f03-8212-4c4a27edbe13\" (UID: \"f09400da-5834-4f03-8212-4c4a27edbe13\") " Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.682065 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f09400da-5834-4f03-8212-4c4a27edbe13" (UID: "f09400da-5834-4f03-8212-4c4a27edbe13"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.781402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.781849 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf4hh\" (UniqueName: \"kubernetes.io/projected/eed68a6c-a7de-40de-8617-34b66781ec31-kube-api-access-jf4hh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.782771 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.782947 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f09400da-5834-4f03-8212-4c4a27edbe13-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.884529 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf4hh\" (UniqueName: \"kubernetes.io/projected/eed68a6c-a7de-40de-8617-34b66781ec31-kube-api-access-jf4hh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.884623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.884689 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.894539 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.897531 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.903816 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf4hh\" (UniqueName: \"kubernetes.io/projected/eed68a6c-a7de-40de-8617-34b66781ec31-kube-api-access-jf4hh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffbch\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:19 crc kubenswrapper[4823]: I1206 06:59:19.925892 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:20 crc kubenswrapper[4823]: I1206 06:59:20.451271 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch"] Dec 06 06:59:20 crc kubenswrapper[4823]: I1206 06:59:20.557590 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" event={"ID":"eed68a6c-a7de-40de-8617-34b66781ec31","Type":"ContainerStarted","Data":"f6507982b50450f77cfb9569423c1feb4fd14a43896ac06d459a99b82513961b"} Dec 06 06:59:21 crc kubenswrapper[4823]: I1206 06:59:21.567797 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" event={"ID":"eed68a6c-a7de-40de-8617-34b66781ec31","Type":"ContainerStarted","Data":"5e85cad300c49d9de6706e4617d79e91d365f916330d493b7a62076e4314cc57"} Dec 06 06:59:21 crc kubenswrapper[4823]: I1206 06:59:21.609775 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" podStartSLOduration=2.005602964 podStartE2EDuration="2.609746356s" podCreationTimestamp="2025-12-06 06:59:19 +0000 UTC" firstStartedPulling="2025-12-06 06:59:20.456442211 +0000 UTC m=+2061.742194171" lastFinishedPulling="2025-12-06 06:59:21.060585603 +0000 UTC m=+2062.346337563" observedRunningTime="2025-12-06 06:59:21.592372271 +0000 UTC m=+2062.878124231" watchObservedRunningTime="2025-12-06 06:59:21.609746356 +0000 UTC m=+2062.895498316" Dec 06 06:59:24 crc kubenswrapper[4823]: I1206 06:59:24.044565 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mg4v"] Dec 06 06:59:24 crc kubenswrapper[4823]: I1206 06:59:24.055151 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mg4v"] Dec 06 06:59:25 crc kubenswrapper[4823]: I1206 06:59:25.152423 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d768b1-0912-4fb7-8bc8-408233b3af09" path="/var/lib/kubelet/pods/e2d768b1-0912-4fb7-8bc8-408233b3af09/volumes" Dec 06 06:59:27 crc kubenswrapper[4823]: I1206 06:59:27.624089 4823 generic.go:334] "Generic (PLEG): container finished" podID="eed68a6c-a7de-40de-8617-34b66781ec31" containerID="5e85cad300c49d9de6706e4617d79e91d365f916330d493b7a62076e4314cc57" exitCode=0 Dec 06 06:59:27 crc kubenswrapper[4823]: I1206 06:59:27.624171 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" event={"ID":"eed68a6c-a7de-40de-8617-34b66781ec31","Type":"ContainerDied","Data":"5e85cad300c49d9de6706e4617d79e91d365f916330d493b7a62076e4314cc57"} Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.167297 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.267750 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-inventory\") pod \"eed68a6c-a7de-40de-8617-34b66781ec31\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.267861 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-ssh-key\") pod \"eed68a6c-a7de-40de-8617-34b66781ec31\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.267898 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf4hh\" (UniqueName: \"kubernetes.io/projected/eed68a6c-a7de-40de-8617-34b66781ec31-kube-api-access-jf4hh\") pod \"eed68a6c-a7de-40de-8617-34b66781ec31\" (UID: \"eed68a6c-a7de-40de-8617-34b66781ec31\") " Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.278955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed68a6c-a7de-40de-8617-34b66781ec31-kube-api-access-jf4hh" (OuterVolumeSpecName: "kube-api-access-jf4hh") pod "eed68a6c-a7de-40de-8617-34b66781ec31" (UID: "eed68a6c-a7de-40de-8617-34b66781ec31"). InnerVolumeSpecName "kube-api-access-jf4hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.298694 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eed68a6c-a7de-40de-8617-34b66781ec31" (UID: "eed68a6c-a7de-40de-8617-34b66781ec31"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.305830 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-inventory" (OuterVolumeSpecName: "inventory") pod "eed68a6c-a7de-40de-8617-34b66781ec31" (UID: "eed68a6c-a7de-40de-8617-34b66781ec31"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.372445 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.372651 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed68a6c-a7de-40de-8617-34b66781ec31-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.372793 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf4hh\" (UniqueName: \"kubernetes.io/projected/eed68a6c-a7de-40de-8617-34b66781ec31-kube-api-access-jf4hh\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.642401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" event={"ID":"eed68a6c-a7de-40de-8617-34b66781ec31","Type":"ContainerDied","Data":"f6507982b50450f77cfb9569423c1feb4fd14a43896ac06d459a99b82513961b"} Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.642836 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6507982b50450f77cfb9569423c1feb4fd14a43896ac06d459a99b82513961b" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.642519 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffbch" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.732071 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8"] Dec 06 06:59:29 crc kubenswrapper[4823]: E1206 06:59:29.732873 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed68a6c-a7de-40de-8617-34b66781ec31" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.732957 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed68a6c-a7de-40de-8617-34b66781ec31" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.733270 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed68a6c-a7de-40de-8617-34b66781ec31" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.734091 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.736947 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.737090 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.737250 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.743467 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.753678 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8"] Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.883524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.883590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrng\" (UniqueName: \"kubernetes.io/projected/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-kube-api-access-pnrng\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.883621 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.985543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.985636 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrng\" (UniqueName: \"kubernetes.io/projected/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-kube-api-access-pnrng\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.985691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: 
\"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.990932 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:29 crc kubenswrapper[4823]: I1206 06:59:29.992548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:30 crc kubenswrapper[4823]: I1206 06:59:30.003262 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrng\" (UniqueName: \"kubernetes.io/projected/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-kube-api-access-pnrng\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lhwv8\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:30 crc kubenswrapper[4823]: I1206 06:59:30.053059 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 06:59:30 crc kubenswrapper[4823]: I1206 06:59:30.622162 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8"] Dec 06 06:59:30 crc kubenswrapper[4823]: W1206 06:59:30.627038 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49f9da65_c637_468d_b0e6_7e8f3a9c6a6a.slice/crio-2243f2b4b74a9911faa2b27134067d6ef0e36e03f120c44f59cc6deba42a9555 WatchSource:0}: Error finding container 2243f2b4b74a9911faa2b27134067d6ef0e36e03f120c44f59cc6deba42a9555: Status 404 returned error can't find the container with id 2243f2b4b74a9911faa2b27134067d6ef0e36e03f120c44f59cc6deba42a9555 Dec 06 06:59:30 crc kubenswrapper[4823]: I1206 06:59:30.655847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" event={"ID":"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a","Type":"ContainerStarted","Data":"2243f2b4b74a9911faa2b27134067d6ef0e36e03f120c44f59cc6deba42a9555"} Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.031977 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmq6h"] Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.036210 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.043907 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmq6h"] Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.222880 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-catalog-content\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.223233 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kldj7\" (UniqueName: \"kubernetes.io/projected/d03393c7-71a4-4432-b561-de67ed94fc94-kube-api-access-kldj7\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.223339 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-utilities\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.325376 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-utilities\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.325747 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-catalog-content\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.325873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kldj7\" (UniqueName: \"kubernetes.io/projected/d03393c7-71a4-4432-b561-de67ed94fc94-kube-api-access-kldj7\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.325966 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-utilities\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.326224 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-catalog-content\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.346866 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kldj7\" (UniqueName: \"kubernetes.io/projected/d03393c7-71a4-4432-b561-de67ed94fc94-kube-api-access-kldj7\") pod \"redhat-operators-hmq6h\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.405518 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.666594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" event={"ID":"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a","Type":"ContainerStarted","Data":"98ac4b7553c6c779053d58d8b38c2c25220e56d16ff124bb2ce67c6eadff23fd"} Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.691510 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" podStartSLOduration=2.212419965 podStartE2EDuration="2.691487741s" podCreationTimestamp="2025-12-06 06:59:29 +0000 UTC" firstStartedPulling="2025-12-06 06:59:30.630012515 +0000 UTC m=+2071.915764465" lastFinishedPulling="2025-12-06 06:59:31.109080281 +0000 UTC m=+2072.394832241" observedRunningTime="2025-12-06 06:59:31.685234999 +0000 UTC m=+2072.970986969" watchObservedRunningTime="2025-12-06 06:59:31.691487741 +0000 UTC m=+2072.977239701" Dec 06 06:59:31 crc kubenswrapper[4823]: I1206 06:59:31.894405 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmq6h"] Dec 06 06:59:32 crc kubenswrapper[4823]: I1206 06:59:32.725131 4823 generic.go:334] "Generic (PLEG): container finished" podID="d03393c7-71a4-4432-b561-de67ed94fc94" containerID="a5e12720a12e6dd9d4dd03512834eb5691bf01665cc87994048f6e35caaeb377" exitCode=0 Dec 06 06:59:32 crc kubenswrapper[4823]: I1206 06:59:32.726578 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerDied","Data":"a5e12720a12e6dd9d4dd03512834eb5691bf01665cc87994048f6e35caaeb377"} Dec 06 06:59:32 crc kubenswrapper[4823]: I1206 06:59:32.726817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerStarted","Data":"17923df89d88d2660d5feabbac35090c57a0670ab7115e9587f89f156c4e12f6"} Dec 06 06:59:33 crc kubenswrapper[4823]: I1206 06:59:33.737306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerStarted","Data":"19cfeb8ce4f35bf424712be9d36154c512ed572a7311363dd7037e84fafc59b4"} Dec 06 06:59:36 crc kubenswrapper[4823]: I1206 06:59:36.052019 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:59:36 crc kubenswrapper[4823]: I1206 06:59:36.052395 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 06 06:59:37 crc kubenswrapper[4823]: I1206 06:59:37.777959 4823 generic.go:334] "Generic (PLEG): container finished" podID="d03393c7-71a4-4432-b561-de67ed94fc94" containerID="19cfeb8ce4f35bf424712be9d36154c512ed572a7311363dd7037e84fafc59b4" exitCode=0 Dec 06 06:59:37 crc kubenswrapper[4823]: I1206 06:59:37.778036 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerDied","Data":"19cfeb8ce4f35bf424712be9d36154c512ed572a7311363dd7037e84fafc59b4"} Dec 06 06:59:38 crc kubenswrapper[4823]: I1206 06:59:38.791899 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerStarted","Data":"94819b16d3b61a47c1ea86c962707d2f2b3c1abd071af80a405d7bacfd1adf62"} Dec 06 06:59:39 crc kubenswrapper[4823]: I1206 06:59:39.847027 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmq6h" podStartSLOduration=3.089958512 podStartE2EDuration="8.846999603s" podCreationTimestamp="2025-12-06 06:59:31 +0000 UTC" firstStartedPulling="2025-12-06 06:59:32.731882134 +0000 UTC m=+2074.017634094" lastFinishedPulling="2025-12-06 06:59:38.488923225 +0000 UTC m=+2079.774675185" observedRunningTime="2025-12-06 06:59:39.838629039 +0000 UTC m=+2081.124381009" watchObservedRunningTime="2025-12-06 06:59:39.846999603 +0000 UTC m=+2081.132751573" Dec 06 06:59:41 crc kubenswrapper[4823]: I1206 06:59:41.406646 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:41 crc kubenswrapper[4823]: I1206 06:59:41.406756 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:42 crc kubenswrapper[4823]: I1206 06:59:42.453459 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmq6h" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="registry-server" probeResult="failure" output=< Dec 06 06:59:42 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 06:59:42 crc kubenswrapper[4823]: > Dec 06 06:59:43 crc kubenswrapper[4823]: I1206 06:59:43.151267 4823 scope.go:117] "RemoveContainer" containerID="c895c98f6cf98e6a81debec5e4a9e88b0cdccf1126182e0c068a647d8dc21d28" Dec 06 06:59:46 crc kubenswrapper[4823]: I1206 06:59:46.068976 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mx8j"] Dec 06 06:59:46 crc kubenswrapper[4823]: I1206 06:59:46.091774 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mx8j"] Dec 06 06:59:47 crc kubenswrapper[4823]: I1206 06:59:47.215865 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c" path="/var/lib/kubelet/pods/36b6b1c1-b316-4c9e-b1cb-cb1bd622c64c/volumes" Dec 06 06:59:51 crc kubenswrapper[4823]: I1206 06:59:51.035418 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kgd2q"] Dec 06 06:59:51 crc kubenswrapper[4823]: I1206 06:59:51.052036 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kgd2q"] Dec 06 06:59:51 crc kubenswrapper[4823]: I1206 06:59:51.153459 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="53ac01ab-ea2b-4b2c-9e2e-dab4612351d5" path="/var/lib/kubelet/pods/53ac01ab-ea2b-4b2c-9e2e-dab4612351d5/volumes" Dec 06 06:59:51 crc kubenswrapper[4823]: I1206 06:59:51.458743 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:51 crc kubenswrapper[4823]: I1206 06:59:51.511727 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:51 crc kubenswrapper[4823]: I1206 06:59:51.702200 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmq6h"] Dec 06 06:59:53 crc kubenswrapper[4823]: I1206 06:59:53.168914 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmq6h" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="registry-server" containerID="cri-o://94819b16d3b61a47c1ea86c962707d2f2b3c1abd071af80a405d7bacfd1adf62" gracePeriod=2 Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.184526 4823 generic.go:334] "Generic (PLEG): container finished" podID="d03393c7-71a4-4432-b561-de67ed94fc94" containerID="94819b16d3b61a47c1ea86c962707d2f2b3c1abd071af80a405d7bacfd1adf62" exitCode=0 Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.184733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerDied","Data":"94819b16d3b61a47c1ea86c962707d2f2b3c1abd071af80a405d7bacfd1adf62"} Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.184922 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmq6h" event={"ID":"d03393c7-71a4-4432-b561-de67ed94fc94","Type":"ContainerDied","Data":"17923df89d88d2660d5feabbac35090c57a0670ab7115e9587f89f156c4e12f6"} Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.184950 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17923df89d88d2660d5feabbac35090c57a0670ab7115e9587f89f156c4e12f6" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.261618 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.283197 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-utilities\") pod \"d03393c7-71a4-4432-b561-de67ed94fc94\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.283319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kldj7\" (UniqueName: \"kubernetes.io/projected/d03393c7-71a4-4432-b561-de67ed94fc94-kube-api-access-kldj7\") pod \"d03393c7-71a4-4432-b561-de67ed94fc94\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.283591 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-catalog-content\") pod \"d03393c7-71a4-4432-b561-de67ed94fc94\" (UID: \"d03393c7-71a4-4432-b561-de67ed94fc94\") " Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.284057 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-utilities" (OuterVolumeSpecName: "utilities") pod "d03393c7-71a4-4432-b561-de67ed94fc94" (UID: "d03393c7-71a4-4432-b561-de67ed94fc94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.289598 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03393c7-71a4-4432-b561-de67ed94fc94-kube-api-access-kldj7" (OuterVolumeSpecName: "kube-api-access-kldj7") pod "d03393c7-71a4-4432-b561-de67ed94fc94" (UID: "d03393c7-71a4-4432-b561-de67ed94fc94"). InnerVolumeSpecName "kube-api-access-kldj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.296854 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.296902 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kldj7\" (UniqueName: \"kubernetes.io/projected/d03393c7-71a4-4432-b561-de67ed94fc94-kube-api-access-kldj7\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.433368 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d03393c7-71a4-4432-b561-de67ed94fc94" (UID: "d03393c7-71a4-4432-b561-de67ed94fc94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:59:54 crc kubenswrapper[4823]: I1206 06:59:54.502265 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d03393c7-71a4-4432-b561-de67ed94fc94-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:59:55 crc kubenswrapper[4823]: I1206 06:59:55.207949 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmq6h" Dec 06 06:59:55 crc kubenswrapper[4823]: I1206 06:59:55.255725 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmq6h"] Dec 06 06:59:55 crc kubenswrapper[4823]: I1206 06:59:55.273959 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmq6h"] Dec 06 06:59:57 crc kubenswrapper[4823]: I1206 06:59:57.153365 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" path="/var/lib/kubelet/pods/d03393c7-71a4-4432-b561-de67ed94fc94/volumes" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.151582 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69"] Dec 06 07:00:00 crc kubenswrapper[4823]: E1206 07:00:00.152406 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.152426 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4823]: E1206 07:00:00.152481 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="extract-utilities" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.152492 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="extract-utilities" Dec 06 07:00:00 crc kubenswrapper[4823]: E1206 07:00:00.152518 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="extract-content" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.152526 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="extract-content" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.152815 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03393c7-71a4-4432-b561-de67ed94fc94" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.153896 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.157341 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.158624 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.167118 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69"] Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.221937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09750040-6c82-466f-8cea-f040fb6ffb34-secret-volume\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.222107 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swp2\" (UniqueName: \"kubernetes.io/projected/09750040-6c82-466f-8cea-f040fb6ffb34-kube-api-access-5swp2\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.222128 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09750040-6c82-466f-8cea-f040fb6ffb34-config-volume\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.325303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swp2\" (UniqueName: \"kubernetes.io/projected/09750040-6c82-466f-8cea-f040fb6ffb34-kube-api-access-5swp2\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.325616 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09750040-6c82-466f-8cea-f040fb6ffb34-config-volume\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.325822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09750040-6c82-466f-8cea-f040fb6ffb34-secret-volume\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.326602 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09750040-6c82-466f-8cea-f040fb6ffb34-config-volume\") pod 
\"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.336800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09750040-6c82-466f-8cea-f040fb6ffb34-secret-volume\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.342054 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swp2\" (UniqueName: \"kubernetes.io/projected/09750040-6c82-466f-8cea-f040fb6ffb34-kube-api-access-5swp2\") pod \"collect-profiles-29416740-jwp69\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:00 crc kubenswrapper[4823]: I1206 07:00:00.478527 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:01 crc kubenswrapper[4823]: I1206 07:00:01.042371 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69"] Dec 06 07:00:01 crc kubenswrapper[4823]: I1206 07:00:01.320682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" event={"ID":"09750040-6c82-466f-8cea-f040fb6ffb34","Type":"ContainerStarted","Data":"a9bee78d997d8f83f0f433a932774ae808471261c0a84537a4b6fac400518a3d"} Dec 06 07:00:01 crc kubenswrapper[4823]: I1206 07:00:01.322288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" event={"ID":"09750040-6c82-466f-8cea-f040fb6ffb34","Type":"ContainerStarted","Data":"43a872d04459004af514d285c86ac061814452ff60122bde130fd24f626f5b28"} Dec 06 07:00:02 crc kubenswrapper[4823]: I1206 07:00:02.332863 4823 generic.go:334] "Generic (PLEG): container finished" podID="09750040-6c82-466f-8cea-f040fb6ffb34" containerID="a9bee78d997d8f83f0f433a932774ae808471261c0a84537a4b6fac400518a3d" exitCode=0 Dec 06 07:00:02 crc kubenswrapper[4823]: I1206 07:00:02.332982 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" event={"ID":"09750040-6c82-466f-8cea-f040fb6ffb34","Type":"ContainerDied","Data":"a9bee78d997d8f83f0f433a932774ae808471261c0a84537a4b6fac400518a3d"} Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.782610 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.946118 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09750040-6c82-466f-8cea-f040fb6ffb34-config-volume\") pod \"09750040-6c82-466f-8cea-f040fb6ffb34\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.946301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09750040-6c82-466f-8cea-f040fb6ffb34-secret-volume\") pod \"09750040-6c82-466f-8cea-f040fb6ffb34\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.946424 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swp2\" (UniqueName: \"kubernetes.io/projected/09750040-6c82-466f-8cea-f040fb6ffb34-kube-api-access-5swp2\") pod \"09750040-6c82-466f-8cea-f040fb6ffb34\" (UID: \"09750040-6c82-466f-8cea-f040fb6ffb34\") " Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.947864 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09750040-6c82-466f-8cea-f040fb6ffb34-config-volume" (OuterVolumeSpecName: "config-volume") pod "09750040-6c82-466f-8cea-f040fb6ffb34" (UID: "09750040-6c82-466f-8cea-f040fb6ffb34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.952418 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09750040-6c82-466f-8cea-f040fb6ffb34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "09750040-6c82-466f-8cea-f040fb6ffb34" (UID: "09750040-6c82-466f-8cea-f040fb6ffb34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4823]: I1206 07:00:03.952749 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09750040-6c82-466f-8cea-f040fb6ffb34-kube-api-access-5swp2" (OuterVolumeSpecName: "kube-api-access-5swp2") pod "09750040-6c82-466f-8cea-f040fb6ffb34" (UID: "09750040-6c82-466f-8cea-f040fb6ffb34"). InnerVolumeSpecName "kube-api-access-5swp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.048787 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09750040-6c82-466f-8cea-f040fb6ffb34-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.048823 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09750040-6c82-466f-8cea-f040fb6ffb34-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.048835 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5swp2\" (UniqueName: \"kubernetes.io/projected/09750040-6c82-466f-8cea-f040fb6ffb34-kube-api-access-5swp2\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.352335 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" event={"ID":"09750040-6c82-466f-8cea-f040fb6ffb34","Type":"ContainerDied","Data":"43a872d04459004af514d285c86ac061814452ff60122bde130fd24f626f5b28"} Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.352704 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a872d04459004af514d285c86ac061814452ff60122bde130fd24f626f5b28" Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.352427 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69" Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.418291 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9"] Dec 06 07:00:04 crc kubenswrapper[4823]: I1206 07:00:04.427607 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-czmn9"] Dec 06 07:00:05 crc kubenswrapper[4823]: I1206 07:00:05.239801 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a0336c-e3d8-4290-a0bb-62d7a7f357ba" path="/var/lib/kubelet/pods/95a0336c-e3d8-4290-a0bb-62d7a7f357ba/volumes" Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.051822 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.052220 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.052285 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.053300 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5d2ae4a22e402696798d2d26ede0bf777dfb9593268a6c6415aab7996e8a81d"} 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.053387 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://e5d2ae4a22e402696798d2d26ede0bf777dfb9593268a6c6415aab7996e8a81d" gracePeriod=600 Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.378300 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="e5d2ae4a22e402696798d2d26ede0bf777dfb9593268a6c6415aab7996e8a81d" exitCode=0 Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.378845 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"e5d2ae4a22e402696798d2d26ede0bf777dfb9593268a6c6415aab7996e8a81d"} Dec 06 07:00:06 crc kubenswrapper[4823]: I1206 07:00:06.378898 4823 scope.go:117] "RemoveContainer" containerID="129ebd314bb336af5f968a117b7d7d84f6d557844ec0c7f6f8c8aa752114e423" Dec 06 07:00:07 crc kubenswrapper[4823]: I1206 07:00:07.389031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e"} Dec 06 07:00:17 crc kubenswrapper[4823]: I1206 07:00:17.600831 4823 generic.go:334] "Generic (PLEG): container finished" podID="49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" containerID="98ac4b7553c6c779053d58d8b38c2c25220e56d16ff124bb2ce67c6eadff23fd" exitCode=0 Dec 06 07:00:17 crc kubenswrapper[4823]: I1206 07:00:17.600931 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" event={"ID":"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a","Type":"ContainerDied","Data":"98ac4b7553c6c779053d58d8b38c2c25220e56d16ff124bb2ce67c6eadff23fd"} Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.032645 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.163360 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-ssh-key\") pod \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.163736 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnrng\" (UniqueName: \"kubernetes.io/projected/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-kube-api-access-pnrng\") pod \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.163956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-inventory\") pod \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\" (UID: \"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a\") " Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.169440 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-kube-api-access-pnrng" (OuterVolumeSpecName: "kube-api-access-pnrng") pod "49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" (UID: "49f9da65-c637-468d-b0e6-7e8f3a9c6a6a"). InnerVolumeSpecName "kube-api-access-pnrng". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.198689 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" (UID: "49f9da65-c637-468d-b0e6-7e8f3a9c6a6a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.199911 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-inventory" (OuterVolumeSpecName: "inventory") pod "49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" (UID: "49f9da65-c637-468d-b0e6-7e8f3a9c6a6a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.421571 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.421616 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.421630 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnrng\" (UniqueName: \"kubernetes.io/projected/49f9da65-c637-468d-b0e6-7e8f3a9c6a6a-kube-api-access-pnrng\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.620067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" event={"ID":"49f9da65-c637-468d-b0e6-7e8f3a9c6a6a","Type":"ContainerDied","Data":"2243f2b4b74a9911faa2b27134067d6ef0e36e03f120c44f59cc6deba42a9555"} Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.620107 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2243f2b4b74a9911faa2b27134067d6ef0e36e03f120c44f59cc6deba42a9555" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.620156 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lhwv8" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.694239 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv"] Dec 06 07:00:19 crc kubenswrapper[4823]: E1206 07:00:19.694717 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09750040-6c82-466f-8cea-f040fb6ffb34" containerName="collect-profiles" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.694738 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="09750040-6c82-466f-8cea-f040fb6ffb34" containerName="collect-profiles" Dec 06 07:00:19 crc kubenswrapper[4823]: E1206 07:00:19.694759 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.694768 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.694997 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="09750040-6c82-466f-8cea-f040fb6ffb34" containerName="collect-profiles" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.695025 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f9da65-c637-468d-b0e6-7e8f3a9c6a6a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.695706 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.698128 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.698216 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.698783 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.699347 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.703174 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv"] Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.728261 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2455g\" (UniqueName: \"kubernetes.io/projected/c2f3406e-802c-4387-90f6-51980c01408a-kube-api-access-2455g\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.728325 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.728370 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.830222 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2455g\" (UniqueName: \"kubernetes.io/projected/c2f3406e-802c-4387-90f6-51980c01408a-kube-api-access-2455g\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.830275 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.830321 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" 
(UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.834782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.834859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:19 crc kubenswrapper[4823]: I1206 07:00:19.845863 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2455g\" (UniqueName: \"kubernetes.io/projected/c2f3406e-802c-4387-90f6-51980c01408a-kube-api-access-2455g\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-56wlv\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:20 crc kubenswrapper[4823]: I1206 07:00:20.014759 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:00:20 crc kubenswrapper[4823]: W1206 07:00:20.596623 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f3406e_802c_4387_90f6_51980c01408a.slice/crio-cedc3dbc0f2f288aaf5f75e1ed15d840ece96ffb352e872f1e7ded29a18fdafe WatchSource:0}: Error finding container cedc3dbc0f2f288aaf5f75e1ed15d840ece96ffb352e872f1e7ded29a18fdafe: Status 404 returned error can't find the container with id cedc3dbc0f2f288aaf5f75e1ed15d840ece96ffb352e872f1e7ded29a18fdafe Dec 06 07:00:20 crc kubenswrapper[4823]: I1206 07:00:20.602055 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv"] Dec 06 07:00:20 crc kubenswrapper[4823]: I1206 07:00:20.633118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" event={"ID":"c2f3406e-802c-4387-90f6-51980c01408a","Type":"ContainerStarted","Data":"cedc3dbc0f2f288aaf5f75e1ed15d840ece96ffb352e872f1e7ded29a18fdafe"} Dec 06 07:00:22 crc kubenswrapper[4823]: I1206 07:00:22.761066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" event={"ID":"c2f3406e-802c-4387-90f6-51980c01408a","Type":"ContainerStarted","Data":"50f04ed66123d26b04496e70446da760232ddcff06f321e8ad286bcf6adbc23c"} Dec 06 07:00:22 crc kubenswrapper[4823]: I1206 07:00:22.792288 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" podStartSLOduration=3.259717051 podStartE2EDuration="3.792264312s" podCreationTimestamp="2025-12-06 07:00:19 +0000 UTC" firstStartedPulling="2025-12-06 07:00:20.599629064 +0000 UTC m=+2121.885381024" lastFinishedPulling="2025-12-06 07:00:21.132176325 +0000 UTC m=+2122.417928285" observedRunningTime="2025-12-06 
07:00:22.785759563 +0000 UTC m=+2124.071511513" watchObservedRunningTime="2025-12-06 07:00:22.792264312 +0000 UTC m=+2124.078016272" Dec 06 07:00:34 crc kubenswrapper[4823]: I1206 07:00:34.049139 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4qhhv"] Dec 06 07:00:34 crc kubenswrapper[4823]: I1206 07:00:34.059600 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4qhhv"] Dec 06 07:00:35 crc kubenswrapper[4823]: I1206 07:00:35.153829 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d03281d-e24f-4882-9428-1e3e30ca70ae" path="/var/lib/kubelet/pods/4d03281d-e24f-4882-9428-1e3e30ca70ae/volumes" Dec 06 07:00:43 crc kubenswrapper[4823]: I1206 07:00:43.259912 4823 scope.go:117] "RemoveContainer" containerID="116d2b7444ef3d4081fce406e2824eced8461166365179783c15162e5b1c4fca" Dec 06 07:00:43 crc kubenswrapper[4823]: I1206 07:00:43.317209 4823 scope.go:117] "RemoveContainer" containerID="b7d08a7b792aed99b3b7b4bc1d85d1df3f88459353465f30190b1fac28347c29" Dec 06 07:00:43 crc kubenswrapper[4823]: I1206 07:00:43.344703 4823 scope.go:117] "RemoveContainer" containerID="5ebef798af4f48fab5049f43add77c2defb01ff161364f7cc081d96a1d477f59" Dec 06 07:00:43 crc kubenswrapper[4823]: I1206 07:00:43.411861 4823 scope.go:117] "RemoveContainer" containerID="71d0841bd99163cec97b53ba0cf403bc9f24472eb24888d603b77b1ebc93c420" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.159469 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416741-47p2f"] Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.162162 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.172593 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416741-47p2f"] Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.283037 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-fernet-keys\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.283170 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-combined-ca-bundle\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.283225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-config-data\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.283280 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95tt\" (UniqueName: \"kubernetes.io/projected/817eedb8-20c3-48ab-b610-60b5a06ee67f-kube-api-access-g95tt\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " 
pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.385442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-fernet-keys\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.386555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-combined-ca-bundle\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.386612 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-config-data\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.386647 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95tt\" (UniqueName: \"kubernetes.io/projected/817eedb8-20c3-48ab-b610-60b5a06ee67f-kube-api-access-g95tt\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.394062 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-combined-ca-bundle\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.394171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-fernet-keys\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.402277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-config-data\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.410393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95tt\" (UniqueName: \"kubernetes.io/projected/817eedb8-20c3-48ab-b610-60b5a06ee67f-kube-api-access-g95tt\") pod \"keystone-cron-29416741-47p2f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.482490 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:00 crc kubenswrapper[4823]: W1206 07:01:00.963325 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod817eedb8_20c3_48ab_b610_60b5a06ee67f.slice/crio-29a2dc1ce3287d78dbe2406c4dde5c33bddfad90bbb37bae2a902e072c70dea8 WatchSource:0}: Error finding container 29a2dc1ce3287d78dbe2406c4dde5c33bddfad90bbb37bae2a902e072c70dea8: Status 404 returned error can't find the container with id 29a2dc1ce3287d78dbe2406c4dde5c33bddfad90bbb37bae2a902e072c70dea8 Dec 06 07:01:00 crc kubenswrapper[4823]: I1206 07:01:00.965310 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416741-47p2f"] Dec 06 07:01:01 crc kubenswrapper[4823]: I1206 07:01:01.155033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-47p2f" event={"ID":"817eedb8-20c3-48ab-b610-60b5a06ee67f","Type":"ContainerStarted","Data":"29a2dc1ce3287d78dbe2406c4dde5c33bddfad90bbb37bae2a902e072c70dea8"} Dec 06 07:01:02 crc kubenswrapper[4823]: I1206 07:01:02.162317 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-47p2f" event={"ID":"817eedb8-20c3-48ab-b610-60b5a06ee67f","Type":"ContainerStarted","Data":"556371a3e6dc1de434abad861d7cc16a9289bf52ce638ede1a3012614b406951"} Dec 06 07:01:02 crc kubenswrapper[4823]: I1206 07:01:02.180230 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416741-47p2f" podStartSLOduration=2.180212153 podStartE2EDuration="2.180212153s" podCreationTimestamp="2025-12-06 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:01:02.176969909 +0000 UTC m=+2163.462721859" watchObservedRunningTime="2025-12-06 07:01:02.180212153 +0000 UTC m=+2163.465964113" Dec 06 07:01:04 crc kubenswrapper[4823]: I1206 07:01:04.181878 4823 generic.go:334] "Generic (PLEG): container finished" podID="817eedb8-20c3-48ab-b610-60b5a06ee67f" containerID="556371a3e6dc1de434abad861d7cc16a9289bf52ce638ede1a3012614b406951" exitCode=0 Dec 06 07:01:04 crc kubenswrapper[4823]: I1206 07:01:04.181932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-47p2f" event={"ID":"817eedb8-20c3-48ab-b610-60b5a06ee67f","Type":"ContainerDied","Data":"556371a3e6dc1de434abad861d7cc16a9289bf52ce638ede1a3012614b406951"} Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.532379 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.596744 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95tt\" (UniqueName: \"kubernetes.io/projected/817eedb8-20c3-48ab-b610-60b5a06ee67f-kube-api-access-g95tt\") pod \"817eedb8-20c3-48ab-b610-60b5a06ee67f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.596852 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-fernet-keys\") pod \"817eedb8-20c3-48ab-b610-60b5a06ee67f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.596901 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-combined-ca-bundle\") pod \"817eedb8-20c3-48ab-b610-60b5a06ee67f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.597022 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-config-data\") pod \"817eedb8-20c3-48ab-b610-60b5a06ee67f\" (UID: \"817eedb8-20c3-48ab-b610-60b5a06ee67f\") " Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.603213 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "817eedb8-20c3-48ab-b610-60b5a06ee67f" (UID: "817eedb8-20c3-48ab-b610-60b5a06ee67f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.603454 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817eedb8-20c3-48ab-b610-60b5a06ee67f-kube-api-access-g95tt" (OuterVolumeSpecName: "kube-api-access-g95tt") pod "817eedb8-20c3-48ab-b610-60b5a06ee67f" (UID: "817eedb8-20c3-48ab-b610-60b5a06ee67f"). InnerVolumeSpecName "kube-api-access-g95tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.646596 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "817eedb8-20c3-48ab-b610-60b5a06ee67f" (UID: "817eedb8-20c3-48ab-b610-60b5a06ee67f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.673856 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-config-data" (OuterVolumeSpecName: "config-data") pod "817eedb8-20c3-48ab-b610-60b5a06ee67f" (UID: "817eedb8-20c3-48ab-b610-60b5a06ee67f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.698994 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95tt\" (UniqueName: \"kubernetes.io/projected/817eedb8-20c3-48ab-b610-60b5a06ee67f-kube-api-access-g95tt\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.699021 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.699034 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:05 crc kubenswrapper[4823]: I1206 07:01:05.699043 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817eedb8-20c3-48ab-b610-60b5a06ee67f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:06 crc kubenswrapper[4823]: I1206 07:01:06.201441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-47p2f" event={"ID":"817eedb8-20c3-48ab-b610-60b5a06ee67f","Type":"ContainerDied","Data":"29a2dc1ce3287d78dbe2406c4dde5c33bddfad90bbb37bae2a902e072c70dea8"} Dec 06 07:01:06 crc kubenswrapper[4823]: I1206 07:01:06.201494 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a2dc1ce3287d78dbe2406c4dde5c33bddfad90bbb37bae2a902e072c70dea8" Dec 06 07:01:06 crc kubenswrapper[4823]: I1206 07:01:06.201602 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416741-47p2f" Dec 06 07:01:20 crc kubenswrapper[4823]: I1206 07:01:20.337860 4823 generic.go:334] "Generic (PLEG): container finished" podID="c2f3406e-802c-4387-90f6-51980c01408a" containerID="50f04ed66123d26b04496e70446da760232ddcff06f321e8ad286bcf6adbc23c" exitCode=0 Dec 06 07:01:20 crc kubenswrapper[4823]: I1206 07:01:20.338035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" event={"ID":"c2f3406e-802c-4387-90f6-51980c01408a","Type":"ContainerDied","Data":"50f04ed66123d26b04496e70446da760232ddcff06f321e8ad286bcf6adbc23c"} Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.798902 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.915068 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-ssh-key\") pod \"c2f3406e-802c-4387-90f6-51980c01408a\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.915984 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2455g\" (UniqueName: \"kubernetes.io/projected/c2f3406e-802c-4387-90f6-51980c01408a-kube-api-access-2455g\") pod \"c2f3406e-802c-4387-90f6-51980c01408a\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.916127 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-inventory\") pod \"c2f3406e-802c-4387-90f6-51980c01408a\" (UID: \"c2f3406e-802c-4387-90f6-51980c01408a\") " Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.921982 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f3406e-802c-4387-90f6-51980c01408a-kube-api-access-2455g" (OuterVolumeSpecName: "kube-api-access-2455g") pod "c2f3406e-802c-4387-90f6-51980c01408a" (UID: "c2f3406e-802c-4387-90f6-51980c01408a"). InnerVolumeSpecName "kube-api-access-2455g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.949848 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c2f3406e-802c-4387-90f6-51980c01408a" (UID: "c2f3406e-802c-4387-90f6-51980c01408a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:21 crc kubenswrapper[4823]: I1206 07:01:21.949870 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-inventory" (OuterVolumeSpecName: "inventory") pod "c2f3406e-802c-4387-90f6-51980c01408a" (UID: "c2f3406e-802c-4387-90f6-51980c01408a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.018155 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2455g\" (UniqueName: \"kubernetes.io/projected/c2f3406e-802c-4387-90f6-51980c01408a-kube-api-access-2455g\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.018183 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.018191 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2f3406e-802c-4387-90f6-51980c01408a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.361414 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" event={"ID":"c2f3406e-802c-4387-90f6-51980c01408a","Type":"ContainerDied","Data":"cedc3dbc0f2f288aaf5f75e1ed15d840ece96ffb352e872f1e7ded29a18fdafe"} Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.361461 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cedc3dbc0f2f288aaf5f75e1ed15d840ece96ffb352e872f1e7ded29a18fdafe" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.361467 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-56wlv" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.439473 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m249v"] Dec 06 07:01:22 crc kubenswrapper[4823]: E1206 07:01:22.440050 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817eedb8-20c3-48ab-b610-60b5a06ee67f" containerName="keystone-cron" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.440079 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="817eedb8-20c3-48ab-b610-60b5a06ee67f" containerName="keystone-cron" Dec 06 07:01:22 crc kubenswrapper[4823]: E1206 07:01:22.440097 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f3406e-802c-4387-90f6-51980c01408a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.440108 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f3406e-802c-4387-90f6-51980c01408a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.440395 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="817eedb8-20c3-48ab-b610-60b5a06ee67f" containerName="keystone-cron" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.440425 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f3406e-802c-4387-90f6-51980c01408a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.441428 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.444037 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.444107 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.444351 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.444535 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.451807 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m249v"] Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.631029 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.631119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.631799 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhmp\" (UniqueName: \"kubernetes.io/projected/11dfacbe-1b10-4f76-8cbd-2a272679c18c-kube-api-access-rhhmp\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.733639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhmp\" (UniqueName: \"kubernetes.io/projected/11dfacbe-1b10-4f76-8cbd-2a272679c18c-kube-api-access-rhhmp\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.733712 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.733749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc 
kubenswrapper[4823]: I1206 07:01:22.739508 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.739802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.756044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhmp\" (UniqueName: \"kubernetes.io/projected/11dfacbe-1b10-4f76-8cbd-2a272679c18c-kube-api-access-rhhmp\") pod \"ssh-known-hosts-edpm-deployment-m249v\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:22 crc kubenswrapper[4823]: I1206 07:01:22.761562 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:23 crc kubenswrapper[4823]: I1206 07:01:23.430562 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m249v"] Dec 06 07:01:24 crc kubenswrapper[4823]: I1206 07:01:24.380601 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" event={"ID":"11dfacbe-1b10-4f76-8cbd-2a272679c18c","Type":"ContainerStarted","Data":"afde5bfca00c82046bc06838d160afd31118b81d4c147b89b324201c37a2e368"} Dec 06 07:01:24 crc kubenswrapper[4823]: I1206 07:01:24.380991 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" event={"ID":"11dfacbe-1b10-4f76-8cbd-2a272679c18c","Type":"ContainerStarted","Data":"0d4c556e91978589c5e304b98cb2b9bd7f0da9cb634cf4af273e3ef4912b2ee6"} Dec 06 07:01:24 crc kubenswrapper[4823]: I1206 07:01:24.412209 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" podStartSLOduration=1.900200258 podStartE2EDuration="2.412189501s" podCreationTimestamp="2025-12-06 07:01:22 +0000 UTC" firstStartedPulling="2025-12-06 07:01:23.441267748 +0000 UTC m=+2184.727019708" lastFinishedPulling="2025-12-06 07:01:23.953256991 +0000 UTC m=+2185.239008951" observedRunningTime="2025-12-06 07:01:24.407717091 +0000 UTC m=+2185.693469051" watchObservedRunningTime="2025-12-06 07:01:24.412189501 +0000 UTC m=+2185.697941451" Dec 06 07:01:32 crc kubenswrapper[4823]: I1206 07:01:32.458726 4823 generic.go:334] "Generic (PLEG): container finished" podID="11dfacbe-1b10-4f76-8cbd-2a272679c18c" containerID="afde5bfca00c82046bc06838d160afd31118b81d4c147b89b324201c37a2e368" exitCode=0 Dec 06 07:01:32 crc kubenswrapper[4823]: I1206 07:01:32.458823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" event={"ID":"11dfacbe-1b10-4f76-8cbd-2a272679c18c","Type":"ContainerDied","Data":"afde5bfca00c82046bc06838d160afd31118b81d4c147b89b324201c37a2e368"} Dec 06 07:01:33 crc kubenswrapper[4823]: I1206 07:01:33.942632 4823 util.go:48] "No ready sandbox for pod can be found. 
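
Unwrapped like this, the pod_startup_latency_tracker entry above is easy to check by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the m=+… suffixes are Go monotonic-clock readings and drop out of the subtraction. A minimal sketch over the values logged above (the SLO-excludes-pull rule is inferred from these numbers, not stated anywhere in the log):

    from datetime import datetime

    def ts(s: str) -> datetime:
        # Values look like "2025-12-06 07:01:23.441267748 +0000 UTC";
        # keep date and clock, trim nanoseconds to the 6 digits datetime supports.
        date, clock = s.split()[:2]
        return datetime.fromisoformat(f"{date} {clock[:15]}+00:00")

    pod_created   = ts("2025-12-06 07:01:22 +0000 UTC")           # podCreationTimestamp
    pull_started  = ts("2025-12-06 07:01:23.441267748 +0000 UTC") # firstStartedPulling
    pull_finished = ts("2025-12-06 07:01:23.953256991 +0000 UTC") # lastFinishedPulling
    running       = ts("2025-12-06 07:01:24.412189501 +0000 UTC") # watchObservedRunningTime

    e2e = (running - pod_created).total_seconds()          # ~2.412189s = podStartE2EDuration
    pull = (pull_finished - pull_started).total_seconds()  # ~0.511989s image-pull window
    print(f"E2E {e2e:.6f}s, pull {pull:.6f}s, SLO {e2e - pull:.6f}s")  # SLO ~1.900200s

The same relation holds for the other podStartSLOduration entries further down (run-os, reboot-os, install-certs), up to the nanoseconds the sketch truncates.
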
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.095409 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhhmp\" (UniqueName: \"kubernetes.io/projected/11dfacbe-1b10-4f76-8cbd-2a272679c18c-kube-api-access-rhhmp\") pod \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.095635 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-inventory-0\") pod \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.095706 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-ssh-key-openstack-edpm-ipam\") pod \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\" (UID: \"11dfacbe-1b10-4f76-8cbd-2a272679c18c\") " Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.103591 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11dfacbe-1b10-4f76-8cbd-2a272679c18c-kube-api-access-rhhmp" (OuterVolumeSpecName: "kube-api-access-rhhmp") pod "11dfacbe-1b10-4f76-8cbd-2a272679c18c" (UID: "11dfacbe-1b10-4f76-8cbd-2a272679c18c"). InnerVolumeSpecName "kube-api-access-rhhmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.131217 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "11dfacbe-1b10-4f76-8cbd-2a272679c18c" (UID: "11dfacbe-1b10-4f76-8cbd-2a272679c18c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.133841 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11dfacbe-1b10-4f76-8cbd-2a272679c18c" (UID: "11dfacbe-1b10-4f76-8cbd-2a272679c18c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.198083 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhhmp\" (UniqueName: \"kubernetes.io/projected/11dfacbe-1b10-4f76-8cbd-2a272679c18c-kube-api-access-rhhmp\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.198129 4823 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.198139 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11dfacbe-1b10-4f76-8cbd-2a272679c18c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.481693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" event={"ID":"11dfacbe-1b10-4f76-8cbd-2a272679c18c","Type":"ContainerDied","Data":"0d4c556e91978589c5e304b98cb2b9bd7f0da9cb634cf4af273e3ef4912b2ee6"} Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.481742 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m249v" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.481746 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d4c556e91978589c5e304b98cb2b9bd7f0da9cb634cf4af273e3ef4912b2ee6" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.581516 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j"] Dec 06 07:01:34 crc kubenswrapper[4823]: E1206 07:01:34.582027 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11dfacbe-1b10-4f76-8cbd-2a272679c18c" containerName="ssh-known-hosts-edpm-deployment" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.582049 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="11dfacbe-1b10-4f76-8cbd-2a272679c18c" containerName="ssh-known-hosts-edpm-deployment" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.582319 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="11dfacbe-1b10-4f76-8cbd-2a272679c18c" containerName="ssh-known-hosts-edpm-deployment" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.583167 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.585384 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.585564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.585886 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.586846 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.599011 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j"] Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.708879 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.708975 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfl9\" (UniqueName: \"kubernetes.io/projected/acd2b596-5f29-44a7-9946-5027a36dd330-kube-api-access-hpfl9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.709041 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.811768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.811881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfl9\" (UniqueName: \"kubernetes.io/projected/acd2b596-5f29-44a7-9946-5027a36dd330-kube-api-access-hpfl9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.811926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.819705 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.820259 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.837513 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfl9\" (UniqueName: \"kubernetes.io/projected/acd2b596-5f29-44a7-9946-5027a36dd330-kube-api-access-hpfl9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skk4j\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:34 crc kubenswrapper[4823]: I1206 07:01:34.911251 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:35 crc kubenswrapper[4823]: I1206 07:01:35.436769 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j"] Dec 06 07:01:35 crc kubenswrapper[4823]: I1206 07:01:35.492389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" event={"ID":"acd2b596-5f29-44a7-9946-5027a36dd330","Type":"ContainerStarted","Data":"93283cf28a845e2d1c0455e1441419f41f4047818f05d52a0dbdc951a9819b60"} Dec 06 07:01:37 crc kubenswrapper[4823]: I1206 07:01:37.510324 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" event={"ID":"acd2b596-5f29-44a7-9946-5027a36dd330","Type":"ContainerStarted","Data":"b4db50f3d715b475b6bf6859f5a0b194d1378ee0c7e4e040a445045f3e25d79a"} Dec 06 07:01:37 crc kubenswrapper[4823]: I1206 07:01:37.526817 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" podStartSLOduration=2.410716706 podStartE2EDuration="3.526796059s" podCreationTimestamp="2025-12-06 07:01:34 +0000 UTC" firstStartedPulling="2025-12-06 07:01:35.438886165 +0000 UTC m=+2196.724638125" lastFinishedPulling="2025-12-06 07:01:36.554965518 +0000 UTC m=+2197.840717478" observedRunningTime="2025-12-06 07:01:37.524002607 +0000 UTC m=+2198.809754567" watchObservedRunningTime="2025-12-06 07:01:37.526796059 +0000 UTC m=+2198.812548019" Dec 06 07:01:45 crc kubenswrapper[4823]: I1206 07:01:45.590511 4823 generic.go:334] "Generic (PLEG): container finished" podID="acd2b596-5f29-44a7-9946-5027a36dd330" containerID="b4db50f3d715b475b6bf6859f5a0b194d1378ee0c7e4e040a445045f3e25d79a" exitCode=0 Dec 06 07:01:45 crc kubenswrapper[4823]: I1206 07:01:45.590603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" 
event={"ID":"acd2b596-5f29-44a7-9946-5027a36dd330","Type":"ContainerDied","Data":"b4db50f3d715b475b6bf6859f5a0b194d1378ee0c7e4e040a445045f3e25d79a"} Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.064646 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.161289 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-ssh-key\") pod \"acd2b596-5f29-44a7-9946-5027a36dd330\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.161493 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfl9\" (UniqueName: \"kubernetes.io/projected/acd2b596-5f29-44a7-9946-5027a36dd330-kube-api-access-hpfl9\") pod \"acd2b596-5f29-44a7-9946-5027a36dd330\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.161583 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-inventory\") pod \"acd2b596-5f29-44a7-9946-5027a36dd330\" (UID: \"acd2b596-5f29-44a7-9946-5027a36dd330\") " Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.166679 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd2b596-5f29-44a7-9946-5027a36dd330-kube-api-access-hpfl9" (OuterVolumeSpecName: "kube-api-access-hpfl9") pod "acd2b596-5f29-44a7-9946-5027a36dd330" (UID: "acd2b596-5f29-44a7-9946-5027a36dd330"). InnerVolumeSpecName "kube-api-access-hpfl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.190871 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-inventory" (OuterVolumeSpecName: "inventory") pod "acd2b596-5f29-44a7-9946-5027a36dd330" (UID: "acd2b596-5f29-44a7-9946-5027a36dd330"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.190931 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "acd2b596-5f29-44a7-9946-5027a36dd330" (UID: "acd2b596-5f29-44a7-9946-5027a36dd330"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.264542 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.264571 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2b596-5f29-44a7-9946-5027a36dd330-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.264580 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfl9\" (UniqueName: \"kubernetes.io/projected/acd2b596-5f29-44a7-9946-5027a36dd330-kube-api-access-hpfl9\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.610981 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" event={"ID":"acd2b596-5f29-44a7-9946-5027a36dd330","Type":"ContainerDied","Data":"93283cf28a845e2d1c0455e1441419f41f4047818f05d52a0dbdc951a9819b60"} Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.611027 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93283cf28a845e2d1c0455e1441419f41f4047818f05d52a0dbdc951a9819b60" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.611055 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skk4j" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.680063 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg"] Dec 06 07:01:47 crc kubenswrapper[4823]: E1206 07:01:47.680473 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd2b596-5f29-44a7-9946-5027a36dd330" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.680490 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd2b596-5f29-44a7-9946-5027a36dd330" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.680710 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd2b596-5f29-44a7-9946-5027a36dd330" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.681376 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.684270 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.684279 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.684523 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.684530 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.691018 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg"] Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.777171 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.777231 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klcn9\" (UniqueName: \"kubernetes.io/projected/e4591025-d216-4f7e-8054-7f9cfcc90bfd-kube-api-access-klcn9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.777349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.879700 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.879898 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.879936 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klcn9\" (UniqueName: \"kubernetes.io/projected/e4591025-d216-4f7e-8054-7f9cfcc90bfd-kube-api-access-klcn9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: 
\"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.885229 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.885603 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:47 crc kubenswrapper[4823]: I1206 07:01:47.900112 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klcn9\" (UniqueName: \"kubernetes.io/projected/e4591025-d216-4f7e-8054-7f9cfcc90bfd-kube-api-access-klcn9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:48 crc kubenswrapper[4823]: I1206 07:01:48.007085 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:01:48 crc kubenswrapper[4823]: I1206 07:01:48.516328 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg"] Dec 06 07:01:48 crc kubenswrapper[4823]: W1206 07:01:48.516478 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4591025_d216_4f7e_8054_7f9cfcc90bfd.slice/crio-bac24a366d3099730845e35c604e8f13c3168a60e2e4f77479def6e08eb44e02 WatchSource:0}: Error finding container bac24a366d3099730845e35c604e8f13c3168a60e2e4f77479def6e08eb44e02: Status 404 returned error can't find the container with id bac24a366d3099730845e35c604e8f13c3168a60e2e4f77479def6e08eb44e02 Dec 06 07:01:48 crc kubenswrapper[4823]: I1206 07:01:48.621462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" event={"ID":"e4591025-d216-4f7e-8054-7f9cfcc90bfd","Type":"ContainerStarted","Data":"bac24a366d3099730845e35c604e8f13c3168a60e2e4f77479def6e08eb44e02"} Dec 06 07:01:49 crc kubenswrapper[4823]: I1206 07:01:49.633083 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" event={"ID":"e4591025-d216-4f7e-8054-7f9cfcc90bfd","Type":"ContainerStarted","Data":"d17d4c695b6e65a2f5a7bad00857738a617cfbbb7829339b1a18c041e041621c"} Dec 06 07:01:49 crc kubenswrapper[4823]: I1206 07:01:49.654554 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" podStartSLOduration=2.244278161 podStartE2EDuration="2.654532967s" podCreationTimestamp="2025-12-06 07:01:47 +0000 UTC" firstStartedPulling="2025-12-06 07:01:48.520102641 +0000 UTC m=+2209.805854601" lastFinishedPulling="2025-12-06 07:01:48.930357447 +0000 UTC m=+2210.216109407" observedRunningTime="2025-12-06 07:01:49.648238054 +0000 UTC m=+2210.933990014" 
watchObservedRunningTime="2025-12-06 07:01:49.654532967 +0000 UTC m=+2210.940284927" Dec 06 07:01:59 crc kubenswrapper[4823]: I1206 07:01:59.722228 4823 generic.go:334] "Generic (PLEG): container finished" podID="e4591025-d216-4f7e-8054-7f9cfcc90bfd" containerID="d17d4c695b6e65a2f5a7bad00857738a617cfbbb7829339b1a18c041e041621c" exitCode=0 Dec 06 07:01:59 crc kubenswrapper[4823]: I1206 07:01:59.722412 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" event={"ID":"e4591025-d216-4f7e-8054-7f9cfcc90bfd","Type":"ContainerDied","Data":"d17d4c695b6e65a2f5a7bad00857738a617cfbbb7829339b1a18c041e041621c"} Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.142342 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.238062 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-inventory\") pod \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.238246 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klcn9\" (UniqueName: \"kubernetes.io/projected/e4591025-d216-4f7e-8054-7f9cfcc90bfd-kube-api-access-klcn9\") pod \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.238316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-ssh-key\") pod \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\" (UID: \"e4591025-d216-4f7e-8054-7f9cfcc90bfd\") " Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.244235 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4591025-d216-4f7e-8054-7f9cfcc90bfd-kube-api-access-klcn9" (OuterVolumeSpecName: "kube-api-access-klcn9") pod "e4591025-d216-4f7e-8054-7f9cfcc90bfd" (UID: "e4591025-d216-4f7e-8054-7f9cfcc90bfd"). InnerVolumeSpecName "kube-api-access-klcn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.274134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e4591025-d216-4f7e-8054-7f9cfcc90bfd" (UID: "e4591025-d216-4f7e-8054-7f9cfcc90bfd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.282471 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-inventory" (OuterVolumeSpecName: "inventory") pod "e4591025-d216-4f7e-8054-7f9cfcc90bfd" (UID: "e4591025-d216-4f7e-8054-7f9cfcc90bfd"). InnerVolumeSpecName "inventory". 
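
The generic.go:334 "container finished" event above closes the loop that the earlier ContainerStarted PLEG event opened for the same containerID: d17d4c69… started at 07:01:49 and finished with exitCode=0 at 07:01:59, so the reboot-os job container ran for roughly ten seconds. Pairing the two event types by containerID gives a per-container wall time for every job in this log; a sketch over journal text on stdin (second-level resolution, no midnight-rollover handling):

    import re
    import sys

    START = re.compile(r'(\d\d:\d\d:\d\d)\.\d+.*"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')
    DONE = re.compile(r'(\d\d:\d\d:\d\d)\.\d+.*containerID="([0-9a-f]{64})" exitCode=(\d+)')

    def secs(hms: str) -> int:
        h, m, s = map(int, hms.split(":"))
        return 3600 * h + 60 * m + s

    started = {}  # containerID -> first ContainerStarted time, in seconds
    for line in sys.stdin:
        if m := START.search(line):
            started.setdefault(m.group(2), secs(m.group(1)))
        elif m := DONE.search(line):
            t0 = started.get(m.group(2))
            ran = f"ran ~{secs(m.group(1)) - t0}s" if t0 is not None else "start not seen"
            print(f"{m.group(2)[:12]}  exit={m.group(3)}  {ran}")

Sandbox containers (the second ContainerStarted per pod) never emit a "container finished" line here, so they simply stay unmatched in the dictionary.
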
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.342243 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klcn9\" (UniqueName: \"kubernetes.io/projected/e4591025-d216-4f7e-8054-7f9cfcc90bfd-kube-api-access-klcn9\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.342281 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.342299 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4591025-d216-4f7e-8054-7f9cfcc90bfd-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.742080 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" event={"ID":"e4591025-d216-4f7e-8054-7f9cfcc90bfd","Type":"ContainerDied","Data":"bac24a366d3099730845e35c604e8f13c3168a60e2e4f77479def6e08eb44e02"} Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.742377 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac24a366d3099730845e35c604e8f13c3168a60e2e4f77479def6e08eb44e02" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.742151 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.840997 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f"] Dec 06 07:02:01 crc kubenswrapper[4823]: E1206 07:02:01.841516 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4591025-d216-4f7e-8054-7f9cfcc90bfd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.841540 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4591025-d216-4f7e-8054-7f9cfcc90bfd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.841861 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4591025-d216-4f7e-8054-7f9cfcc90bfd" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.842730 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847203 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847497 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847753 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847495 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847502 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847530 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.847550 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.848566 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.854923 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f"] Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954085 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954177 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954268 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" 
(UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954294 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7z4\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-kube-api-access-vl7z4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954348 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954376 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954400 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954462 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954493 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954563 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954610 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954651 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:01 crc kubenswrapper[4823]: I1206 07:02:01.954701 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.056499 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057273 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057350 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057458 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057558 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057640 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.057795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.058149 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7z4\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-kube-api-access-vl7z4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.058385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.058599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: 
\"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.059075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.059341 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.060441 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.061045 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.063777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.063916 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.064168 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.065579 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.065947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.066064 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.066891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.067020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.067196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.067680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.071165 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.077549 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7z4\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-kube-api-access-vl7z4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.173451 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.706925 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f"] Dec 06 07:02:02 crc kubenswrapper[4823]: I1206 07:02:02.750063 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" event={"ID":"908d817e-af62-4f73-a91d-c005192b813c","Type":"ContainerStarted","Data":"cc73ce0c0f77d43b1358c42b243d628e62f06b58c05dedb2fa916120a1695d9d"} Dec 06 07:02:03 crc kubenswrapper[4823]: I1206 07:02:03.762518 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" event={"ID":"908d817e-af62-4f73-a91d-c005192b813c","Type":"ContainerStarted","Data":"6810105cd79af6def9acd6743d744bef296a7f11fd04033f9106285acfb37a02"} Dec 06 07:02:03 crc kubenswrapper[4823]: I1206 07:02:03.793370 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" podStartSLOduration=2.356079595 podStartE2EDuration="2.793343636s" podCreationTimestamp="2025-12-06 07:02:01 +0000 UTC" firstStartedPulling="2025-12-06 07:02:02.708963065 +0000 UTC m=+2223.994715025" lastFinishedPulling="2025-12-06 07:02:03.146227106 +0000 UTC m=+2224.431979066" observedRunningTime="2025-12-06 07:02:03.78933097 +0000 UTC m=+2225.075082940" watchObservedRunningTime="2025-12-06 07:02:03.793343636 +0000 UTC m=+2225.079095596" Dec 06 07:02:06 crc kubenswrapper[4823]: I1206 07:02:06.051731 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:02:06 crc kubenswrapper[4823]: I1206 07:02:06.052056 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:02:36 crc kubenswrapper[4823]: I1206 07:02:36.052571 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:02:36 crc kubenswrapper[4823]: I1206 07:02:36.053140 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" 
podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:02:43 crc kubenswrapper[4823]: I1206 07:02:43.140310 4823 generic.go:334] "Generic (PLEG): container finished" podID="908d817e-af62-4f73-a91d-c005192b813c" containerID="6810105cd79af6def9acd6743d744bef296a7f11fd04033f9106285acfb37a02" exitCode=0 Dec 06 07:02:43 crc kubenswrapper[4823]: I1206 07:02:43.154337 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" event={"ID":"908d817e-af62-4f73-a91d-c005192b813c","Type":"ContainerDied","Data":"6810105cd79af6def9acd6743d744bef296a7f11fd04033f9106285acfb37a02"} Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.563378 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.652261 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.658381 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.753924 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-nova-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-inventory\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754070 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-telemetry-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754121 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ovn-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754201 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-libvirt-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754259 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-repo-setup-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754312 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-bootstrap-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754374 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ssh-key\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754398 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7z4\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-kube-api-access-vl7z4\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754457 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754485 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754538 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-neutron-metadata-combined-ca-bundle\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.754576 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"908d817e-af62-4f73-a91d-c005192b813c\" (UID: \"908d817e-af62-4f73-a91d-c005192b813c\") " Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.755149 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.759759 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.759797 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.760789 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.761503 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.761743 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-kube-api-access-vl7z4" (OuterVolumeSpecName: "kube-api-access-vl7z4") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "kube-api-access-vl7z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.761963 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.762095 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.764360 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.765874 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.773248 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.774002 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.787245 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-inventory" (OuterVolumeSpecName: "inventory") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.796242 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "908d817e-af62-4f73-a91d-c005192b813c" (UID: "908d817e-af62-4f73-a91d-c005192b813c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857436 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857476 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857488 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857497 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7z4\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-kube-api-access-vl7z4\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857509 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857519 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857530 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857574 4823 reconciler_common.go:293] 
"Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908d817e-af62-4f73-a91d-c005192b813c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857585 4823 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857593 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857601 4823 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857610 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:44 crc kubenswrapper[4823]: I1206 07:02:44.857619 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908d817e-af62-4f73-a91d-c005192b813c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.161595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" event={"ID":"908d817e-af62-4f73-a91d-c005192b813c","Type":"ContainerDied","Data":"cc73ce0c0f77d43b1358c42b243d628e62f06b58c05dedb2fa916120a1695d9d"} Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.161670 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.161690 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc73ce0c0f77d43b1358c42b243d628e62f06b58c05dedb2fa916120a1695d9d" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.279724 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6"] Dec 06 07:02:45 crc kubenswrapper[4823]: E1206 07:02:45.280475 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908d817e-af62-4f73-a91d-c005192b813c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.280496 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="908d817e-af62-4f73-a91d-c005192b813c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.280727 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="908d817e-af62-4f73-a91d-c005192b813c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.281477 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.284055 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.284055 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.284111 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.284284 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.287096 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.289433 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6"] Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.366415 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.366531 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.366623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdlvc\" (UniqueName: \"kubernetes.io/projected/8a1945af-9fc9-4571-bd52-c93277ed8c64-kube-api-access-kdlvc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.366815 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.366852 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.468229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdlvc\" 
(UniqueName: \"kubernetes.io/projected/8a1945af-9fc9-4571-bd52-c93277ed8c64-kube-api-access-kdlvc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.468367 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.468404 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.468462 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.468534 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.469585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.473398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.475325 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.480362 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.492633 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdlvc\" (UniqueName: \"kubernetes.io/projected/8a1945af-9fc9-4571-bd52-c93277ed8c64-kube-api-access-kdlvc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gpxd6\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:45 crc kubenswrapper[4823]: I1206 07:02:45.599318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:02:46 crc kubenswrapper[4823]: I1206 07:02:46.160597 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6"] Dec 06 07:02:46 crc kubenswrapper[4823]: I1206 07:02:46.173569 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" event={"ID":"8a1945af-9fc9-4571-bd52-c93277ed8c64","Type":"ContainerStarted","Data":"bad2acd950496b8a7b1b0626a2274ef827ccd1700561855811b88a6b0d115e9a"} Dec 06 07:02:47 crc kubenswrapper[4823]: I1206 07:02:47.184373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" event={"ID":"8a1945af-9fc9-4571-bd52-c93277ed8c64","Type":"ContainerStarted","Data":"236a11c146128fbb31002779f13022106b2b2eab1f0bfb2ae1b014d33526c948"} Dec 06 07:02:47 crc kubenswrapper[4823]: I1206 07:02:47.207287 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" podStartSLOduration=1.77569881 podStartE2EDuration="2.20725735s" podCreationTimestamp="2025-12-06 07:02:45 +0000 UTC" firstStartedPulling="2025-12-06 07:02:46.164172759 +0000 UTC m=+2267.449924719" lastFinishedPulling="2025-12-06 07:02:46.595731299 +0000 UTC m=+2267.881483259" observedRunningTime="2025-12-06 07:02:47.200625326 +0000 UTC m=+2268.486377286" watchObservedRunningTime="2025-12-06 07:02:47.20725735 +0000 UTC m=+2268.493009300" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.677137 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8jkj"] Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.680738 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.686253 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8jkj"] Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.785334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9ps\" (UniqueName: \"kubernetes.io/projected/413b85b7-7058-4f89-a37f-08e017b64898-kube-api-access-9z9ps\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.785395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-utilities\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.785465 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-catalog-content\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.887226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-catalog-content\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.887394 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9ps\" (UniqueName: \"kubernetes.io/projected/413b85b7-7058-4f89-a37f-08e017b64898-kube-api-access-9z9ps\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.887419 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-utilities\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.888032 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-utilities\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.888050 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-catalog-content\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:56 crc kubenswrapper[4823]: I1206 07:02:56.907707 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9z9ps\" (UniqueName: \"kubernetes.io/projected/413b85b7-7058-4f89-a37f-08e017b64898-kube-api-access-9z9ps\") pod \"certified-operators-k8jkj\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:57 crc kubenswrapper[4823]: I1206 07:02:57.039741 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:02:57 crc kubenswrapper[4823]: I1206 07:02:57.670300 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8jkj"] Dec 06 07:02:58 crc kubenswrapper[4823]: I1206 07:02:58.282684 4823 generic.go:334] "Generic (PLEG): container finished" podID="413b85b7-7058-4f89-a37f-08e017b64898" containerID="be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6" exitCode=0 Dec 06 07:02:58 crc kubenswrapper[4823]: I1206 07:02:58.282734 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerDied","Data":"be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6"} Dec 06 07:02:58 crc kubenswrapper[4823]: I1206 07:02:58.283177 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerStarted","Data":"b6b91326d8f8e3f1ced916ebd8d5b99d982b8d984cca2af1ddf22a8cae73ff7b"} Dec 06 07:02:58 crc kubenswrapper[4823]: I1206 07:02:58.285393 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:02:59 crc kubenswrapper[4823]: I1206 07:02:59.294813 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerStarted","Data":"6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324"} Dec 06 07:03:00 crc kubenswrapper[4823]: I1206 07:03:00.331815 4823 generic.go:334] "Generic (PLEG): container finished" podID="413b85b7-7058-4f89-a37f-08e017b64898" containerID="6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324" exitCode=0 Dec 06 07:03:00 crc kubenswrapper[4823]: I1206 07:03:00.331864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerDied","Data":"6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324"} Dec 06 07:03:01 crc kubenswrapper[4823]: I1206 07:03:01.344994 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerStarted","Data":"210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb"} Dec 06 07:03:01 crc kubenswrapper[4823]: I1206 07:03:01.375340 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8jkj" podStartSLOduration=2.566867261 podStartE2EDuration="5.375315976s" podCreationTimestamp="2025-12-06 07:02:56 +0000 UTC" firstStartedPulling="2025-12-06 07:02:58.285109721 +0000 UTC m=+2279.570861681" lastFinishedPulling="2025-12-06 07:03:01.093558436 +0000 UTC m=+2282.379310396" observedRunningTime="2025-12-06 07:03:01.362556164 +0000 UTC m=+2282.648308134" watchObservedRunningTime="2025-12-06 
07:03:01.375315976 +0000 UTC m=+2282.661067936" Dec 06 07:03:06 crc kubenswrapper[4823]: I1206 07:03:06.052453 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:03:06 crc kubenswrapper[4823]: I1206 07:03:06.053038 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:03:06 crc kubenswrapper[4823]: I1206 07:03:06.053715 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:03:06 crc kubenswrapper[4823]: I1206 07:03:06.054546 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:03:06 crc kubenswrapper[4823]: I1206 07:03:06.054617 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" gracePeriod=600 Dec 06 07:03:06 crc kubenswrapper[4823]: E1206 07:03:06.678819 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.040488 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.040554 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.086412 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.410541 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" exitCode=0 Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.410629 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e"} Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.410728 4823 
scope.go:117] "RemoveContainer" containerID="e5d2ae4a22e402696798d2d26ede0bf777dfb9593268a6c6415aab7996e8a81d" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.411304 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:03:07 crc kubenswrapper[4823]: E1206 07:03:07.411582 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.490861 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:03:07 crc kubenswrapper[4823]: I1206 07:03:07.556116 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8jkj"] Dec 06 07:03:09 crc kubenswrapper[4823]: I1206 07:03:09.428606 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k8jkj" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="registry-server" containerID="cri-o://210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb" gracePeriod=2 Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.247729 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.262238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-utilities\") pod \"413b85b7-7058-4f89-a37f-08e017b64898\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.262315 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9ps\" (UniqueName: \"kubernetes.io/projected/413b85b7-7058-4f89-a37f-08e017b64898-kube-api-access-9z9ps\") pod \"413b85b7-7058-4f89-a37f-08e017b64898\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.262414 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-catalog-content\") pod \"413b85b7-7058-4f89-a37f-08e017b64898\" (UID: \"413b85b7-7058-4f89-a37f-08e017b64898\") " Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.263345 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-utilities" (OuterVolumeSpecName: "utilities") pod "413b85b7-7058-4f89-a37f-08e017b64898" (UID: "413b85b7-7058-4f89-a37f-08e017b64898"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.268758 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413b85b7-7058-4f89-a37f-08e017b64898-kube-api-access-9z9ps" (OuterVolumeSpecName: "kube-api-access-9z9ps") pod "413b85b7-7058-4f89-a37f-08e017b64898" (UID: "413b85b7-7058-4f89-a37f-08e017b64898"). InnerVolumeSpecName "kube-api-access-9z9ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.316649 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "413b85b7-7058-4f89-a37f-08e017b64898" (UID: "413b85b7-7058-4f89-a37f-08e017b64898"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.364896 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.365132 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413b85b7-7058-4f89-a37f-08e017b64898-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.365225 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z9ps\" (UniqueName: \"kubernetes.io/projected/413b85b7-7058-4f89-a37f-08e017b64898-kube-api-access-9z9ps\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.447685 4823 generic.go:334] "Generic (PLEG): container finished" podID="413b85b7-7058-4f89-a37f-08e017b64898" containerID="210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb" exitCode=0 Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.447742 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8jkj" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.447788 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerDied","Data":"210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb"} Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.448224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8jkj" event={"ID":"413b85b7-7058-4f89-a37f-08e017b64898","Type":"ContainerDied","Data":"b6b91326d8f8e3f1ced916ebd8d5b99d982b8d984cca2af1ddf22a8cae73ff7b"} Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.448249 4823 scope.go:117] "RemoveContainer" containerID="210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.471607 4823 scope.go:117] "RemoveContainer" containerID="6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.487210 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8jkj"] Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.506280 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k8jkj"] Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.521321 4823 scope.go:117] "RemoveContainer" containerID="be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.552798 4823 scope.go:117] "RemoveContainer" containerID="210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb" Dec 06 07:03:11 crc kubenswrapper[4823]: E1206 07:03:11.553383 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb\": container with ID starting with 210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb not found: ID does not exist" containerID="210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.553516 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb"} err="failed to get container status \"210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb\": rpc error: code = NotFound desc = could not find container \"210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb\": container with ID starting with 210dac1515dbea99426faee9fc9375d33c38fd8f9465d8649a2d4eaf5ddb8eeb not found: ID does not exist" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.553623 4823 scope.go:117] "RemoveContainer" containerID="6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324" Dec 06 07:03:11 crc kubenswrapper[4823]: E1206 07:03:11.554245 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324\": container with ID starting with 6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324 not found: ID does not exist" containerID="6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.554305 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324"} err="failed to get container status \"6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324\": rpc error: code = NotFound desc = could not find container \"6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324\": container with ID starting with 6de90b1c874d44df26d76783f65a68a611fff830f3a04259793dc31e603bc324 not found: ID does not exist" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.554341 4823 scope.go:117] "RemoveContainer" containerID="be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6" Dec 06 07:03:11 crc kubenswrapper[4823]: E1206 07:03:11.555774 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6\": container with ID starting with be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6 not found: ID does not exist" containerID="be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6" Dec 06 07:03:11 crc kubenswrapper[4823]: I1206 07:03:11.555823 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6"} err="failed to get container status \"be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6\": rpc error: code = NotFound desc = could not find container \"be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6\": container with ID starting with be6e3a902496fa9adf7a962cefb970fc12710191daccc0143eb743ec1d5c74c6 not found: ID does not exist" Dec 06 07:03:13 crc kubenswrapper[4823]: I1206 07:03:13.153605 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413b85b7-7058-4f89-a37f-08e017b64898" path="/var/lib/kubelet/pods/413b85b7-7058-4f89-a37f-08e017b64898/volumes" Dec 06 07:03:22 crc kubenswrapper[4823]: I1206 07:03:22.140458 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:03:22 crc kubenswrapper[4823]: E1206 07:03:22.142348 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.907568 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5qc2d"] Dec 06 07:03:25 crc kubenswrapper[4823]: E1206 07:03:25.908614 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="extract-utilities" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.908629 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="extract-utilities" Dec 06 07:03:25 crc kubenswrapper[4823]: E1206 07:03:25.908797 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="extract-content" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.908805 4823 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="extract-content" Dec 06 07:03:25 crc kubenswrapper[4823]: E1206 07:03:25.908817 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="registry-server" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.908825 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="registry-server" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.909081 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="413b85b7-7058-4f89-a37f-08e017b64898" containerName="registry-server" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.910693 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:25 crc kubenswrapper[4823]: I1206 07:03:25.918496 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qc2d"] Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.040646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68mjc\" (UniqueName: \"kubernetes.io/projected/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-kube-api-access-68mjc\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.040793 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-utilities\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.040867 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-catalog-content\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.142195 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68mjc\" (UniqueName: \"kubernetes.io/projected/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-kube-api-access-68mjc\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.142295 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-utilities\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.142334 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-catalog-content\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.142939 
4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-utilities\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.142988 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-catalog-content\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.168368 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68mjc\" (UniqueName: \"kubernetes.io/projected/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-kube-api-access-68mjc\") pod \"redhat-marketplace-5qc2d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.247515 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:26 crc kubenswrapper[4823]: I1206 07:03:26.794308 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qc2d"] Dec 06 07:03:27 crc kubenswrapper[4823]: I1206 07:03:27.605821 4823 generic.go:334] "Generic (PLEG): container finished" podID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerID="238e45f7b41994208728a2a545847add66b1186885bf218555f43a667454fc87" exitCode=0 Dec 06 07:03:27 crc kubenswrapper[4823]: I1206 07:03:27.606021 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qc2d" event={"ID":"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d","Type":"ContainerDied","Data":"238e45f7b41994208728a2a545847add66b1186885bf218555f43a667454fc87"} Dec 06 07:03:27 crc kubenswrapper[4823]: I1206 07:03:27.606135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qc2d" event={"ID":"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d","Type":"ContainerStarted","Data":"bebe8f48e102a59e79596f2d85f683abbf8e99b32c746e7853e697207c2ee13c"} Dec 06 07:03:29 crc kubenswrapper[4823]: I1206 07:03:29.632088 4823 generic.go:334] "Generic (PLEG): container finished" podID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerID="80b7c5739f6930b274fda352593f0d9b57bf0e428b127eb3d18f7c4e84029421" exitCode=0 Dec 06 07:03:29 crc kubenswrapper[4823]: I1206 07:03:29.632154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qc2d" event={"ID":"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d","Type":"ContainerDied","Data":"80b7c5739f6930b274fda352593f0d9b57bf0e428b127eb3d18f7c4e84029421"} Dec 06 07:03:30 crc kubenswrapper[4823]: I1206 07:03:30.643334 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qc2d" event={"ID":"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d","Type":"ContainerStarted","Data":"ca65c351cadbb32f3650fe3bc8efb0e06e2da5e93a22feb42bf027bbad51bf80"} Dec 06 07:03:30 crc kubenswrapper[4823]: I1206 07:03:30.669396 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5qc2d" podStartSLOduration=3.244538493 podStartE2EDuration="5.669376706s" podCreationTimestamp="2025-12-06 
07:03:25 +0000 UTC" firstStartedPulling="2025-12-06 07:03:27.607781436 +0000 UTC m=+2308.893533396" lastFinishedPulling="2025-12-06 07:03:30.032619649 +0000 UTC m=+2311.318371609" observedRunningTime="2025-12-06 07:03:30.666650427 +0000 UTC m=+2311.952402377" watchObservedRunningTime="2025-12-06 07:03:30.669376706 +0000 UTC m=+2311.955128666" Dec 06 07:03:34 crc kubenswrapper[4823]: I1206 07:03:34.141360 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:03:34 crc kubenswrapper[4823]: E1206 07:03:34.141957 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:03:36 crc kubenswrapper[4823]: I1206 07:03:36.248175 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:36 crc kubenswrapper[4823]: I1206 07:03:36.248494 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:36 crc kubenswrapper[4823]: I1206 07:03:36.299377 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:36 crc kubenswrapper[4823]: I1206 07:03:36.752166 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:36 crc kubenswrapper[4823]: I1206 07:03:36.798861 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qc2d"] Dec 06 07:03:38 crc kubenswrapper[4823]: I1206 07:03:38.727173 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5qc2d" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="registry-server" containerID="cri-o://ca65c351cadbb32f3650fe3bc8efb0e06e2da5e93a22feb42bf027bbad51bf80" gracePeriod=2 Dec 06 07:03:38 crc kubenswrapper[4823]: I1206 07:03:38.957656 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fs59n"] Dec 06 07:03:38 crc kubenswrapper[4823]: I1206 07:03:38.959689 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:38 crc kubenswrapper[4823]: I1206 07:03:38.978601 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs59n"] Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.133966 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwhth\" (UniqueName: \"kubernetes.io/projected/5b53b2b6-d051-49cd-928e-1d3767e01713-kube-api-access-mwhth\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.134151 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-utilities\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.134293 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-catalog-content\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.236696 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwhth\" (UniqueName: \"kubernetes.io/projected/5b53b2b6-d051-49cd-928e-1d3767e01713-kube-api-access-mwhth\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.236782 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-utilities\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.236826 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-catalog-content\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.237329 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-utilities\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.237398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-catalog-content\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.260555 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mwhth\" (UniqueName: \"kubernetes.io/projected/5b53b2b6-d051-49cd-928e-1d3767e01713-kube-api-access-mwhth\") pod \"community-operators-fs59n\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.280724 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.742069 4823 generic.go:334] "Generic (PLEG): container finished" podID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerID="ca65c351cadbb32f3650fe3bc8efb0e06e2da5e93a22feb42bf027bbad51bf80" exitCode=0 Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.742159 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qc2d" event={"ID":"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d","Type":"ContainerDied","Data":"ca65c351cadbb32f3650fe3bc8efb0e06e2da5e93a22feb42bf027bbad51bf80"} Dec 06 07:03:39 crc kubenswrapper[4823]: I1206 07:03:39.852300 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs59n"] Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.072044 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.171194 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68mjc\" (UniqueName: \"kubernetes.io/projected/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-kube-api-access-68mjc\") pod \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.171352 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-catalog-content\") pod \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.171498 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-utilities\") pod \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\" (UID: \"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d\") " Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.172253 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-utilities" (OuterVolumeSpecName: "utilities") pod "f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" (UID: "f080f1e2-edff-46ff-a81a-9cf0f4a8f50d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.177881 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-kube-api-access-68mjc" (OuterVolumeSpecName: "kube-api-access-68mjc") pod "f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" (UID: "f080f1e2-edff-46ff-a81a-9cf0f4a8f50d"). InnerVolumeSpecName "kube-api-access-68mjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.190387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" (UID: "f080f1e2-edff-46ff-a81a-9cf0f4a8f50d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.274328 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68mjc\" (UniqueName: \"kubernetes.io/projected/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-kube-api-access-68mjc\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.274359 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.274368 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.755554 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qc2d" event={"ID":"f080f1e2-edff-46ff-a81a-9cf0f4a8f50d","Type":"ContainerDied","Data":"bebe8f48e102a59e79596f2d85f683abbf8e99b32c746e7853e697207c2ee13c"} Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.755653 4823 scope.go:117] "RemoveContainer" containerID="ca65c351cadbb32f3650fe3bc8efb0e06e2da5e93a22feb42bf027bbad51bf80" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.755603 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qc2d" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.757763 4823 generic.go:334] "Generic (PLEG): container finished" podID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerID="064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b" exitCode=0 Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.757815 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerDied","Data":"064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b"} Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.757847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerStarted","Data":"0f0427128d78b9f9c96a9123bc3e662080b5ae648764f65a31d969a872aee374"} Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.778974 4823 scope.go:117] "RemoveContainer" containerID="80b7c5739f6930b274fda352593f0d9b57bf0e428b127eb3d18f7c4e84029421" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.808651 4823 scope.go:117] "RemoveContainer" containerID="238e45f7b41994208728a2a545847add66b1186885bf218555f43a667454fc87" Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.818001 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qc2d"] Dec 06 07:03:40 crc kubenswrapper[4823]: I1206 07:03:40.834796 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qc2d"] Dec 06 07:03:41 crc kubenswrapper[4823]: I1206 07:03:41.167165 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" path="/var/lib/kubelet/pods/f080f1e2-edff-46ff-a81a-9cf0f4a8f50d/volumes" Dec 06 07:03:43 crc kubenswrapper[4823]: I1206 07:03:43.789509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerStarted","Data":"2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf"} Dec 06 07:03:45 crc kubenswrapper[4823]: I1206 07:03:45.810287 4823 generic.go:334] "Generic (PLEG): container finished" podID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerID="2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf" exitCode=0 Dec 06 07:03:45 crc kubenswrapper[4823]: I1206 07:03:45.810332 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerDied","Data":"2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf"} Dec 06 07:03:46 crc kubenswrapper[4823]: I1206 07:03:46.822638 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerStarted","Data":"e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283"} Dec 06 07:03:46 crc kubenswrapper[4823]: I1206 07:03:46.851760 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fs59n" podStartSLOduration=3.228961898 podStartE2EDuration="8.851733011s" podCreationTimestamp="2025-12-06 07:03:38 +0000 UTC" firstStartedPulling="2025-12-06 07:03:40.760313425 +0000 UTC m=+2322.046065385" 
lastFinishedPulling="2025-12-06 07:03:46.383084548 +0000 UTC m=+2327.668836498" observedRunningTime="2025-12-06 07:03:46.842535733 +0000 UTC m=+2328.128287723" watchObservedRunningTime="2025-12-06 07:03:46.851733011 +0000 UTC m=+2328.137484971" Dec 06 07:03:47 crc kubenswrapper[4823]: I1206 07:03:47.145056 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:03:47 crc kubenswrapper[4823]: E1206 07:03:47.145795 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:03:49 crc kubenswrapper[4823]: I1206 07:03:49.281417 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:49 crc kubenswrapper[4823]: I1206 07:03:49.283283 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:49 crc kubenswrapper[4823]: I1206 07:03:49.337125 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:52 crc kubenswrapper[4823]: I1206 07:03:52.881267 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a1945af-9fc9-4571-bd52-c93277ed8c64" containerID="236a11c146128fbb31002779f13022106b2b2eab1f0bfb2ae1b014d33526c948" exitCode=0 Dec 06 07:03:52 crc kubenswrapper[4823]: I1206 07:03:52.881332 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" event={"ID":"8a1945af-9fc9-4571-bd52-c93277ed8c64","Type":"ContainerDied","Data":"236a11c146128fbb31002779f13022106b2b2eab1f0bfb2ae1b014d33526c948"} Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.353769 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.394135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-inventory\") pod \"8a1945af-9fc9-4571-bd52-c93277ed8c64\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.394291 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovn-combined-ca-bundle\") pod \"8a1945af-9fc9-4571-bd52-c93277ed8c64\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.394358 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovncontroller-config-0\") pod \"8a1945af-9fc9-4571-bd52-c93277ed8c64\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.394456 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdlvc\" (UniqueName: \"kubernetes.io/projected/8a1945af-9fc9-4571-bd52-c93277ed8c64-kube-api-access-kdlvc\") pod \"8a1945af-9fc9-4571-bd52-c93277ed8c64\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.394499 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ssh-key\") pod \"8a1945af-9fc9-4571-bd52-c93277ed8c64\" (UID: \"8a1945af-9fc9-4571-bd52-c93277ed8c64\") " Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.401724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1945af-9fc9-4571-bd52-c93277ed8c64-kube-api-access-kdlvc" (OuterVolumeSpecName: "kube-api-access-kdlvc") pod "8a1945af-9fc9-4571-bd52-c93277ed8c64" (UID: "8a1945af-9fc9-4571-bd52-c93277ed8c64"). InnerVolumeSpecName "kube-api-access-kdlvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.405297 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8a1945af-9fc9-4571-bd52-c93277ed8c64" (UID: "8a1945af-9fc9-4571-bd52-c93277ed8c64"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.426369 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8a1945af-9fc9-4571-bd52-c93277ed8c64" (UID: "8a1945af-9fc9-4571-bd52-c93277ed8c64"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.428343 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8a1945af-9fc9-4571-bd52-c93277ed8c64" (UID: "8a1945af-9fc9-4571-bd52-c93277ed8c64"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.434420 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-inventory" (OuterVolumeSpecName: "inventory") pod "8a1945af-9fc9-4571-bd52-c93277ed8c64" (UID: "8a1945af-9fc9-4571-bd52-c93277ed8c64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.496547 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.496600 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.496613 4823 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8a1945af-9fc9-4571-bd52-c93277ed8c64-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.496625 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdlvc\" (UniqueName: \"kubernetes.io/projected/8a1945af-9fc9-4571-bd52-c93277ed8c64-kube-api-access-kdlvc\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.496635 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a1945af-9fc9-4571-bd52-c93277ed8c64-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.901299 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" event={"ID":"8a1945af-9fc9-4571-bd52-c93277ed8c64","Type":"ContainerDied","Data":"bad2acd950496b8a7b1b0626a2274ef827ccd1700561855811b88a6b0d115e9a"} Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.901345 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad2acd950496b8a7b1b0626a2274ef827ccd1700561855811b88a6b0d115e9a" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.901389 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gpxd6" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.991320 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2"] Dec 06 07:03:54 crc kubenswrapper[4823]: E1206 07:03:54.991942 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="extract-content" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.991968 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="extract-content" Dec 06 07:03:54 crc kubenswrapper[4823]: E1206 07:03:54.992005 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="extract-utilities" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.992016 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="extract-utilities" Dec 06 07:03:54 crc kubenswrapper[4823]: E1206 07:03:54.992039 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1945af-9fc9-4571-bd52-c93277ed8c64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.992048 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1945af-9fc9-4571-bd52-c93277ed8c64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 07:03:54 crc kubenswrapper[4823]: E1206 07:03:54.992067 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="registry-server" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.992077 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="registry-server" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.992329 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f080f1e2-edff-46ff-a81a-9cf0f4a8f50d" containerName="registry-server" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.992354 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1945af-9fc9-4571-bd52-c93277ed8c64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.993252 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.995827 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.996121 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.996433 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.996580 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.996724 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:03:54 crc kubenswrapper[4823]: I1206 07:03:54.996862 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.003996 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2"] Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.004475 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.004546 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.004591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.004701 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.004748 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxz89\" (UniqueName: 
\"kubernetes.io/projected/87f31194-7aad-4688-88b3-41c9ac8c2a6f-kube-api-access-dxz89\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.004776 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.106170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.106242 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxz89\" (UniqueName: \"kubernetes.io/projected/87f31194-7aad-4688-88b3-41c9ac8c2a6f-kube-api-access-dxz89\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.106281 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.106337 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.106369 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.106412 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.110916 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.111016 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.112328 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.114424 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.117925 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.125266 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxz89\" (UniqueName: \"kubernetes.io/projected/87f31194-7aad-4688-88b3-41c9ac8c2a6f-kube-api-access-dxz89\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.311128 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.892048 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2"] Dec 06 07:03:55 crc kubenswrapper[4823]: W1206 07:03:55.895060 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87f31194_7aad_4688_88b3_41c9ac8c2a6f.slice/crio-fc36b0866aaf97d144ed25be060075f35f5e29e1919d83df486c26467aacf1fb WatchSource:0}: Error finding container fc36b0866aaf97d144ed25be060075f35f5e29e1919d83df486c26467aacf1fb: Status 404 returned error can't find the container with id fc36b0866aaf97d144ed25be060075f35f5e29e1919d83df486c26467aacf1fb Dec 06 07:03:55 crc kubenswrapper[4823]: I1206 07:03:55.910985 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" event={"ID":"87f31194-7aad-4688-88b3-41c9ac8c2a6f","Type":"ContainerStarted","Data":"fc36b0866aaf97d144ed25be060075f35f5e29e1919d83df486c26467aacf1fb"} Dec 06 07:03:56 crc kubenswrapper[4823]: I1206 07:03:56.924036 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" event={"ID":"87f31194-7aad-4688-88b3-41c9ac8c2a6f","Type":"ContainerStarted","Data":"21397d739b5fc2bbe105369aa8bc27a5c5bd5e1bcc792b92ac9152d5f2aab6a0"} Dec 06 07:03:56 crc kubenswrapper[4823]: I1206 07:03:56.948525 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" podStartSLOduration=2.448534213 podStartE2EDuration="2.948504429s" podCreationTimestamp="2025-12-06 07:03:54 +0000 UTC" firstStartedPulling="2025-12-06 07:03:55.898097434 +0000 UTC m=+2337.183849394" lastFinishedPulling="2025-12-06 07:03:56.39806764 +0000 UTC m=+2337.683819610" observedRunningTime="2025-12-06 07:03:56.941022011 +0000 UTC m=+2338.226774001" watchObservedRunningTime="2025-12-06 07:03:56.948504429 +0000 UTC m=+2338.234256389" Dec 06 07:03:59 crc kubenswrapper[4823]: I1206 07:03:59.148756 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:03:59 crc kubenswrapper[4823]: E1206 07:03:59.149434 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:03:59 crc kubenswrapper[4823]: I1206 07:03:59.343700 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:03:59 crc kubenswrapper[4823]: I1206 07:03:59.419725 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs59n"] Dec 06 07:03:59 crc kubenswrapper[4823]: I1206 07:03:59.956012 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fs59n" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="registry-server" 
containerID="cri-o://e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283" gracePeriod=2 Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.503554 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.626409 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwhth\" (UniqueName: \"kubernetes.io/projected/5b53b2b6-d051-49cd-928e-1d3767e01713-kube-api-access-mwhth\") pod \"5b53b2b6-d051-49cd-928e-1d3767e01713\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.626675 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-catalog-content\") pod \"5b53b2b6-d051-49cd-928e-1d3767e01713\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.626753 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-utilities\") pod \"5b53b2b6-d051-49cd-928e-1d3767e01713\" (UID: \"5b53b2b6-d051-49cd-928e-1d3767e01713\") " Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.627733 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-utilities" (OuterVolumeSpecName: "utilities") pod "5b53b2b6-d051-49cd-928e-1d3767e01713" (UID: "5b53b2b6-d051-49cd-928e-1d3767e01713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.643672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b53b2b6-d051-49cd-928e-1d3767e01713-kube-api-access-mwhth" (OuterVolumeSpecName: "kube-api-access-mwhth") pod "5b53b2b6-d051-49cd-928e-1d3767e01713" (UID: "5b53b2b6-d051-49cd-928e-1d3767e01713"). InnerVolumeSpecName "kube-api-access-mwhth". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.682829 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b53b2b6-d051-49cd-928e-1d3767e01713" (UID: "5b53b2b6-d051-49cd-928e-1d3767e01713"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.729152 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.729188 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b53b2b6-d051-49cd-928e-1d3767e01713-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.729203 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwhth\" (UniqueName: \"kubernetes.io/projected/5b53b2b6-d051-49cd-928e-1d3767e01713-kube-api-access-mwhth\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.970107 4823 generic.go:334] "Generic (PLEG): container finished" podID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerID="e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283" exitCode=0 Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.970154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerDied","Data":"e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283"} Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.970165 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs59n" Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.970185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs59n" event={"ID":"5b53b2b6-d051-49cd-928e-1d3767e01713","Type":"ContainerDied","Data":"0f0427128d78b9f9c96a9123bc3e662080b5ae648764f65a31d969a872aee374"} Dec 06 07:04:00 crc kubenswrapper[4823]: I1206 07:04:00.970203 4823 scope.go:117] "RemoveContainer" containerID="e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.000795 4823 scope.go:117] "RemoveContainer" containerID="2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.009396 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs59n"] Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.018818 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fs59n"] Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.025450 4823 scope.go:117] "RemoveContainer" containerID="064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.075580 4823 scope.go:117] "RemoveContainer" containerID="e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283" Dec 06 07:04:01 crc kubenswrapper[4823]: E1206 07:04:01.076254 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283\": container with ID starting with e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283 not found: ID does not exist" containerID="e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.076292 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283"} err="failed to get container status \"e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283\": rpc error: code = NotFound desc = could not find container \"e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283\": container with ID starting with e0182d6f4f5cc0bebe7e34badb7040158979dbe127c6a654b6b81beb85930283 not found: ID does not exist" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.076319 4823 scope.go:117] "RemoveContainer" containerID="2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf" Dec 06 07:04:01 crc kubenswrapper[4823]: E1206 07:04:01.076688 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf\": container with ID starting with 2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf not found: ID does not exist" containerID="2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.076715 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf"} err="failed to get container status \"2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf\": rpc error: code = NotFound desc = could not find container \"2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf\": container with ID starting with 2b5e34aacc05c674e5844b2b85a4a9c5518592724d63156d81fd9bad1c637faf not found: ID does not exist" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.076731 4823 scope.go:117] "RemoveContainer" containerID="064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b" Dec 06 07:04:01 crc kubenswrapper[4823]: E1206 07:04:01.077006 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b\": container with ID starting with 064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b not found: ID does not exist" containerID="064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.077031 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b"} err="failed to get container status \"064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b\": rpc error: code = NotFound desc = could not find container \"064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b\": container with ID starting with 064d231c1f4867a75b32d8be4e350dd58c1a7a7b7ebf77a39eba9f2283740f3b not found: ID does not exist" Dec 06 07:04:01 crc kubenswrapper[4823]: I1206 07:04:01.152948 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" path="/var/lib/kubelet/pods/5b53b2b6-d051-49cd-928e-1d3767e01713/volumes" Dec 06 07:04:10 crc kubenswrapper[4823]: I1206 07:04:10.141146 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:04:10 crc kubenswrapper[4823]: E1206 07:04:10.142017 4823 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:04:25 crc kubenswrapper[4823]: I1206 07:04:25.141364 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:04:25 crc kubenswrapper[4823]: E1206 07:04:25.142494 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:04:37 crc kubenswrapper[4823]: I1206 07:04:37.142375 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:04:37 crc kubenswrapper[4823]: E1206 07:04:37.143367 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:04:45 crc kubenswrapper[4823]: I1206 07:04:45.411961 4823 generic.go:334] "Generic (PLEG): container finished" podID="87f31194-7aad-4688-88b3-41c9ac8c2a6f" containerID="21397d739b5fc2bbe105369aa8bc27a5c5bd5e1bcc792b92ac9152d5f2aab6a0" exitCode=0 Dec 06 07:04:45 crc kubenswrapper[4823]: I1206 07:04:45.412033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" event={"ID":"87f31194-7aad-4688-88b3-41c9ac8c2a6f","Type":"ContainerDied","Data":"21397d739b5fc2bbe105369aa8bc27a5c5bd5e1bcc792b92ac9152d5f2aab6a0"} Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.891190 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.987083 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.987269 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxz89\" (UniqueName: \"kubernetes.io/projected/87f31194-7aad-4688-88b3-41c9ac8c2a6f-kube-api-access-dxz89\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.987338 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-nova-metadata-neutron-config-0\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.987375 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.987445 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-ssh-key\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:46 crc kubenswrapper[4823]: I1206 07:04:46.987471 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-metadata-combined-ca-bundle\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:46.993619 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:46.993643 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f31194-7aad-4688-88b3-41c9ac8c2a6f-kube-api-access-dxz89" (OuterVolumeSpecName: "kube-api-access-dxz89") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f"). InnerVolumeSpecName "kube-api-access-dxz89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.017021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:04:47 crc kubenswrapper[4823]: E1206 07:04:47.019022 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory podName:87f31194-7aad-4688-88b3-41c9ac8c2a6f nodeName:}" failed. No retries permitted until 2025-12-06 07:04:47.518998435 +0000 UTC m=+2388.804750405 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f") : error deleting /var/lib/kubelet/pods/87f31194-7aad-4688-88b3-41c9ac8c2a6f/volume-subpaths: remove /var/lib/kubelet/pods/87f31194-7aad-4688-88b3-41c9ac8c2a6f/volume-subpaths: no such file or directory Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.020768 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.021696 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.089548 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.090175 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.090279 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.090359 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.090442 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxz89\" (UniqueName: \"kubernetes.io/projected/87f31194-7aad-4688-88b3-41c9ac8c2a6f-kube-api-access-dxz89\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.434014 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" event={"ID":"87f31194-7aad-4688-88b3-41c9ac8c2a6f","Type":"ContainerDied","Data":"fc36b0866aaf97d144ed25be060075f35f5e29e1919d83df486c26467aacf1fb"} Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.434075 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc36b0866aaf97d144ed25be060075f35f5e29e1919d83df486c26467aacf1fb" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.434089 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566013 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq"] Dec 06 07:04:47 crc kubenswrapper[4823]: E1206 07:04:47.566476 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="extract-utilities" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566500 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="extract-utilities" Dec 06 07:04:47 crc kubenswrapper[4823]: E1206 07:04:47.566527 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f31194-7aad-4688-88b3-41c9ac8c2a6f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566536 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f31194-7aad-4688-88b3-41c9ac8c2a6f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 07:04:47 crc kubenswrapper[4823]: E1206 07:04:47.566564 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="extract-content" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566574 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="extract-content" Dec 06 07:04:47 crc kubenswrapper[4823]: E1206 07:04:47.566584 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="registry-server" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566591 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="registry-server" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566815 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b53b2b6-d051-49cd-928e-1d3767e01713" containerName="registry-server" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.566828 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f31194-7aad-4688-88b3-41c9ac8c2a6f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.567621 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.572204 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.589474 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq"] Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.600030 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory\") pod \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\" (UID: \"87f31194-7aad-4688-88b3-41c9ac8c2a6f\") " Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.600420 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.600507 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.600812 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4tk\" (UniqueName: \"kubernetes.io/projected/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-kube-api-access-nt4tk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.601152 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.601334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.603861 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory" (OuterVolumeSpecName: "inventory") pod "87f31194-7aad-4688-88b3-41c9ac8c2a6f" (UID: "87f31194-7aad-4688-88b3-41c9ac8c2a6f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.703265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.703371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.703423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.703452 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4tk\" (UniqueName: \"kubernetes.io/projected/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-kube-api-access-nt4tk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.703509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.703563 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87f31194-7aad-4688-88b3-41c9ac8c2a6f-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.707096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.707184 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.708313 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.708341 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.723167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4tk\" (UniqueName: \"kubernetes.io/projected/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-kube-api-access-nt4tk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7dszq\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:47 crc kubenswrapper[4823]: I1206 07:04:47.888419 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:04:48 crc kubenswrapper[4823]: I1206 07:04:48.141037 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:04:48 crc kubenswrapper[4823]: E1206 07:04:48.141564 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:04:48 crc kubenswrapper[4823]: I1206 07:04:48.507307 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq"] Dec 06 07:04:49 crc kubenswrapper[4823]: I1206 07:04:49.454734 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" event={"ID":"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19","Type":"ContainerStarted","Data":"ff9a0e75a8640015c2a254dd8669573409d969b4e29e74f97f644502d884f830"} Dec 06 07:04:49 crc kubenswrapper[4823]: I1206 07:04:49.455288 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" event={"ID":"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19","Type":"ContainerStarted","Data":"178962762fb8510210f71dc07ac084ebda4ce593d20315ac21501a8872eeaecb"} Dec 06 07:04:49 crc kubenswrapper[4823]: I1206 07:04:49.476808 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" podStartSLOduration=1.951068191 podStartE2EDuration="2.476778499s" podCreationTimestamp="2025-12-06 07:04:47 +0000 UTC" firstStartedPulling="2025-12-06 07:04:48.550856555 +0000 UTC m=+2389.836608515" lastFinishedPulling="2025-12-06 07:04:49.076566863 +0000 UTC m=+2390.362318823" observedRunningTime="2025-12-06 07:04:49.475057609 +0000 UTC m=+2390.760809569" watchObservedRunningTime="2025-12-06 07:04:49.476778499 +0000 UTC m=+2390.762530459" Dec 06 07:05:03 crc kubenswrapper[4823]: I1206 07:05:03.142104 4823 scope.go:117] "RemoveContainer" 
containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:05:03 crc kubenswrapper[4823]: E1206 07:05:03.224246 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:05:16 crc kubenswrapper[4823]: I1206 07:05:16.142569 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:05:16 crc kubenswrapper[4823]: E1206 07:05:16.145227 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:05:30 crc kubenswrapper[4823]: I1206 07:05:30.142510 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:05:30 crc kubenswrapper[4823]: E1206 07:05:30.145102 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:05:42 crc kubenswrapper[4823]: I1206 07:05:42.140413 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:05:42 crc kubenswrapper[4823]: E1206 07:05:42.141428 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:05:43 crc kubenswrapper[4823]: I1206 07:05:43.696830 4823 scope.go:117] "RemoveContainer" containerID="a5e12720a12e6dd9d4dd03512834eb5691bf01665cc87994048f6e35caaeb377" Dec 06 07:05:43 crc kubenswrapper[4823]: I1206 07:05:43.722130 4823 scope.go:117] "RemoveContainer" containerID="94819b16d3b61a47c1ea86c962707d2f2b3c1abd071af80a405d7bacfd1adf62" Dec 06 07:05:43 crc kubenswrapper[4823]: I1206 07:05:43.779425 4823 scope.go:117] "RemoveContainer" containerID="19cfeb8ce4f35bf424712be9d36154c512ed572a7311363dd7037e84fafc59b4" Dec 06 07:05:53 crc kubenswrapper[4823]: I1206 07:05:53.141207 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:05:53 crc kubenswrapper[4823]: E1206 07:05:53.142163 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:06:06 crc kubenswrapper[4823]: I1206 07:06:06.141358 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:06:06 crc kubenswrapper[4823]: E1206 07:06:06.142110 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:06:17 crc kubenswrapper[4823]: I1206 07:06:17.140817 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:06:17 crc kubenswrapper[4823]: E1206 07:06:17.141560 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:06:28 crc kubenswrapper[4823]: I1206 07:06:28.141089 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:06:28 crc kubenswrapper[4823]: E1206 07:06:28.143312 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:06:41 crc kubenswrapper[4823]: I1206 07:06:41.142305 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:06:41 crc kubenswrapper[4823]: E1206 07:06:41.143060 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:06:53 crc kubenswrapper[4823]: I1206 07:06:53.141192 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:06:53 crc kubenswrapper[4823]: E1206 07:06:53.141918 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" 
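container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"

Between 07:04:48 and 07:07:55 this RemoveContainer / "Error syncing pod, skipping" pair for machine-config-daemon-7wlj2 recurs every 10 to 15 seconds: each sync-loop pass asks to restart the container and is rejected while the restart back-off is still in force. CrashLoopBackOff delays double per failed restart up to the quoted 5m0s cap (a 10s starting point is the commonly documented kubelet default), so the start only goes through once the window expires, at 07:08:10.990 below, after which the pair stops recurring. A short Go sketch for tallying such rejections per pod when triaging a log like this one; the regular expression and the journal-on-stdin usage are assumptions about this format, not a standard tool.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Tally kubelet CrashLoopBackOff rejections per pod from journal text on stdin.
// Targets lines like:
//   ... pod_workers.go:1301] "Error syncing pod, skipping" err="..." pod="ns/name" podUID="..."
func main() {
	re := regexp.MustCompile(`"Error syncing pod, skipping".* pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}

Piping a journal capture through it (for example journalctl -u kubelet | go run tally.go; the unit name varies by host) surfaces the crash-looping pod immediately.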
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:07:04 crc kubenswrapper[4823]: I1206 07:07:04.141808 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:07:04 crc kubenswrapper[4823]: E1206 07:07:04.142741 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:07:15 crc kubenswrapper[4823]: I1206 07:07:15.141914 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:07:15 crc kubenswrapper[4823]: E1206 07:07:15.142712 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:07:30 crc kubenswrapper[4823]: I1206 07:07:30.141887 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:07:30 crc kubenswrapper[4823]: E1206 07:07:30.143098 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:07:43 crc kubenswrapper[4823]: I1206 07:07:43.141339 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:07:43 crc kubenswrapper[4823]: E1206 07:07:43.142060 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:07:51 crc kubenswrapper[4823]: I1206 07:07:51.594042 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z7czp" podUID="cb125116-0c3b-4831-a05c-9076f5360e28" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:07:51 crc kubenswrapper[4823]: I1206 07:07:51.740760 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" podUID="94cf4797-42d3-4c53-9d68-93210ba23378" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.54:7572/metrics\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:07:51 crc kubenswrapper[4823]: I1206 07:07:51.741133 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lsjzk" podUID="94cf4797-42d3-4c53-9d68-93210ba23378" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.54:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:07:52 crc kubenswrapper[4823]: I1206 07:07:52.345018 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-f8648f98b-jg4v8" podUID="3c12f95f-8514-4b08-8177-d95f8b0bc24d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.55:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:07:52 crc kubenswrapper[4823]: I1206 07:07:52.345124 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-f8648f98b-jg4v8" podUID="3c12f95f-8514-4b08-8177-d95f8b0bc24d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.55:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:07:52 crc kubenswrapper[4823]: I1206 07:07:52.792579 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-pw4mq" podUID="eb5ef3cd-9337-4665-945a-403b2619c53d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:07:55 crc kubenswrapper[4823]: I1206 07:07:55.141851 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:07:55 crc kubenswrapper[4823]: E1206 07:07:55.142582 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:08:10 crc kubenswrapper[4823]: I1206 07:08:10.141459 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:08:10 crc kubenswrapper[4823]: I1206 07:08:10.990282 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"11d8ef1b4ff1a1c78e63818e4e070da86c8a1b557be15aa5354f8e9ec01a1273"} Dec 06 07:09:27 crc kubenswrapper[4823]: I1206 07:09:27.766892 4823 generic.go:334] "Generic (PLEG): container finished" podID="f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" containerID="ff9a0e75a8640015c2a254dd8669573409d969b4e29e74f97f644502d884f830" exitCode=0 Dec 06 07:09:27 crc kubenswrapper[4823]: I1206 07:09:27.766986 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" event={"ID":"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19","Type":"ContainerDied","Data":"ff9a0e75a8640015c2a254dd8669573409d969b4e29e74f97f644502d884f830"} Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.240045 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.337325 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-secret-0\") pod \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.337457 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-combined-ca-bundle\") pod \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.337478 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-ssh-key\") pod \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.337527 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-inventory\") pod \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.337583 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4tk\" (UniqueName: \"kubernetes.io/projected/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-kube-api-access-nt4tk\") pod \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\" (UID: \"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19\") " Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.346715 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-kube-api-access-nt4tk" (OuterVolumeSpecName: "kube-api-access-nt4tk") pod "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" (UID: "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19"). InnerVolumeSpecName "kube-api-access-nt4tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.352289 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" (UID: "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.373295 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" (UID: "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.373355 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-inventory" (OuterVolumeSpecName: "inventory") pod "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" (UID: "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.374749 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" (UID: "f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.457524 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.457973 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.457997 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.458014 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4tk\" (UniqueName: \"kubernetes.io/projected/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-kube-api-access-nt4tk\") on node \"crc\" DevicePath \"\"" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.458033 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.786393 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" event={"ID":"f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19","Type":"ContainerDied","Data":"178962762fb8510210f71dc07ac084ebda4ce593d20315ac21501a8872eeaecb"} Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.786456 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="178962762fb8510210f71dc07ac084ebda4ce593d20315ac21501a8872eeaecb" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.786788 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7dszq" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.897609 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6"] Dec 06 07:09:29 crc kubenswrapper[4823]: E1206 07:09:29.898064 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.898084 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.898319 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.899061 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.902718 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.902944 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.903054 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.903245 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.903438 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.903567 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.903711 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.907579 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6"] Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968147 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968227 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 
07:09:29.968260 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmvl\" (UniqueName: \"kubernetes.io/projected/63f55880-0615-44ec-a7b5-318e731d45c1-kube-api-access-7nmvl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968298 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968366 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968404 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968585 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/63f55880-0615-44ec-a7b5-318e731d45c1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:29 crc kubenswrapper[4823]: I1206 07:09:29.968817 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070430 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 
07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070520 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/63f55880-0615-44ec-a7b5-318e731d45c1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmvl\" (UniqueName: \"kubernetes.io/projected/63f55880-0615-44ec-a7b5-318e731d45c1-kube-api-access-7nmvl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070781 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070822 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.070878 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.073016 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/63f55880-0615-44ec-a7b5-318e731d45c1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.076708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.076923 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.077197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.077233 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.077973 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.078142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.080034 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.097454 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmvl\" (UniqueName: 
\"kubernetes.io/projected/63f55880-0615-44ec-a7b5-318e731d45c1-kube-api-access-7nmvl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gbdl6\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.259963 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.799825 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6"] Dec 06 07:09:30 crc kubenswrapper[4823]: I1206 07:09:30.813718 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:09:31 crc kubenswrapper[4823]: I1206 07:09:31.815306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" event={"ID":"63f55880-0615-44ec-a7b5-318e731d45c1","Type":"ContainerStarted","Data":"223d1db347988fbd93e58bcf27edacff259aebaa7f33264b7fdd26641f82f11d"} Dec 06 07:09:31 crc kubenswrapper[4823]: I1206 07:09:31.815649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" event={"ID":"63f55880-0615-44ec-a7b5-318e731d45c1","Type":"ContainerStarted","Data":"187cd0e718937fa18753c4e6dc1b6d486fcf29f6c636c6b1033f53ee64c3a401"} Dec 06 07:09:31 crc kubenswrapper[4823]: I1206 07:09:31.854434 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" podStartSLOduration=2.341587475 podStartE2EDuration="2.854385576s" podCreationTimestamp="2025-12-06 07:09:29 +0000 UTC" firstStartedPulling="2025-12-06 07:09:30.813444167 +0000 UTC m=+2672.099196127" lastFinishedPulling="2025-12-06 07:09:31.326242268 +0000 UTC m=+2672.611994228" observedRunningTime="2025-12-06 07:09:31.837445273 +0000 UTC m=+2673.123197233" watchObservedRunningTime="2025-12-06 07:09:31.854385576 +0000 UTC m=+2673.140137536" Dec 06 07:10:36 crc kubenswrapper[4823]: I1206 07:10:36.052416 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:10:36 crc kubenswrapper[4823]: I1206 07:10:36.053085 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:11:06 crc kubenswrapper[4823]: I1206 07:11:06.052113 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:11:06 crc kubenswrapper[4823]: I1206 07:11:06.052648 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
Dec 06 07:11:06 crc kubenswrapper[4823]: I1206 07:11:06.994683 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tll7v"] Dec 06 07:11:06 crc kubenswrapper[4823]: I1206 07:11:06.997155 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.035326 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tll7v"] Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.067519 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-utilities\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.067596 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-catalog-content\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.067727 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk27s\" (UniqueName: \"kubernetes.io/projected/f6704713-5ff0-4b3e-83b9-8b08183dcb96-kube-api-access-dk27s\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.169421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-utilities\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.169488 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-catalog-content\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.169588 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk27s\" (UniqueName: \"kubernetes.io/projected/f6704713-5ff0-4b3e-83b9-8b08183dcb96-kube-api-access-dk27s\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.170023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-utilities\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.170066 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-catalog-content\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.190553 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk27s\" (UniqueName: \"kubernetes.io/projected/f6704713-5ff0-4b3e-83b9-8b08183dcb96-kube-api-access-dk27s\") pod \"redhat-operators-tll7v\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.315401 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:07 crc kubenswrapper[4823]: I1206 07:11:07.801887 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tll7v"] Dec 06 07:11:08 crc kubenswrapper[4823]: I1206 07:11:08.727476 4823 generic.go:334] "Generic (PLEG): container finished" podID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerID="a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77" exitCode=0 Dec 06 07:11:08 crc kubenswrapper[4823]: I1206 07:11:08.727541 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerDied","Data":"a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77"} Dec 06 07:11:08 crc kubenswrapper[4823]: I1206 07:11:08.727788 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerStarted","Data":"4fc0e4d9477f8ad46b07ff94863df42c96e21bf9be135e87927afb37631cf68f"} Dec 06 07:11:09 crc kubenswrapper[4823]: I1206 07:11:09.740547 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerStarted","Data":"450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595"} Dec 06 07:11:15 crc kubenswrapper[4823]: I1206 07:11:15.834424 4823 generic.go:334] "Generic (PLEG): container finished" podID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerID="450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595" exitCode=0 Dec 06 07:11:15 crc kubenswrapper[4823]: I1206 07:11:15.834446 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerDied","Data":"450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595"} Dec 06 07:11:16 crc kubenswrapper[4823]: I1206 07:11:16.860946 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerStarted","Data":"43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55"} Dec 06 07:11:16 crc kubenswrapper[4823]: I1206 07:11:16.899175 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tll7v" podStartSLOduration=3.375452544 podStartE2EDuration="10.899140655s" podCreationTimestamp="2025-12-06 07:11:06 +0000 UTC" firstStartedPulling="2025-12-06 07:11:08.729121544 +0000 UTC m=+2770.014873504" lastFinishedPulling="2025-12-06 07:11:16.252809655 +0000 UTC m=+2777.538561615" 
observedRunningTime="2025-12-06 07:11:16.897035854 +0000 UTC m=+2778.182787824" watchObservedRunningTime="2025-12-06 07:11:16.899140655 +0000 UTC m=+2778.184892615" Dec 06 07:11:17 crc kubenswrapper[4823]: I1206 07:11:17.315794 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:17 crc kubenswrapper[4823]: I1206 07:11:17.316711 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:18 crc kubenswrapper[4823]: I1206 07:11:18.364933 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tll7v" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="registry-server" probeResult="failure" output=< Dec 06 07:11:18 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 07:11:18 crc kubenswrapper[4823]: > Dec 06 07:11:27 crc kubenswrapper[4823]: I1206 07:11:27.372625 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:27 crc kubenswrapper[4823]: I1206 07:11:27.421262 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:27 crc kubenswrapper[4823]: I1206 07:11:27.626822 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tll7v"] Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.022359 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tll7v" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="registry-server" containerID="cri-o://43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55" gracePeriod=2 Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.756160 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.854334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-utilities\") pod \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.854552 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-catalog-content\") pod \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.854686 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk27s\" (UniqueName: \"kubernetes.io/projected/f6704713-5ff0-4b3e-83b9-8b08183dcb96-kube-api-access-dk27s\") pod \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\" (UID: \"f6704713-5ff0-4b3e-83b9-8b08183dcb96\") " Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.855501 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-utilities" (OuterVolumeSpecName: "utilities") pod "f6704713-5ff0-4b3e-83b9-8b08183dcb96" (UID: "f6704713-5ff0-4b3e-83b9-8b08183dcb96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.870239 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6704713-5ff0-4b3e-83b9-8b08183dcb96-kube-api-access-dk27s" (OuterVolumeSpecName: "kube-api-access-dk27s") pod "f6704713-5ff0-4b3e-83b9-8b08183dcb96" (UID: "f6704713-5ff0-4b3e-83b9-8b08183dcb96"). InnerVolumeSpecName "kube-api-access-dk27s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.957626 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.957679 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk27s\" (UniqueName: \"kubernetes.io/projected/f6704713-5ff0-4b3e-83b9-8b08183dcb96-kube-api-access-dk27s\") on node \"crc\" DevicePath \"\"" Dec 06 07:11:29 crc kubenswrapper[4823]: I1206 07:11:29.977197 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6704713-5ff0-4b3e-83b9-8b08183dcb96" (UID: "f6704713-5ff0-4b3e-83b9-8b08183dcb96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.033463 4823 generic.go:334] "Generic (PLEG): container finished" podID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerID="43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55" exitCode=0 Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.033503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerDied","Data":"43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55"} Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.033509 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tll7v" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.033530 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tll7v" event={"ID":"f6704713-5ff0-4b3e-83b9-8b08183dcb96","Type":"ContainerDied","Data":"4fc0e4d9477f8ad46b07ff94863df42c96e21bf9be135e87927afb37631cf68f"} Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.033546 4823 scope.go:117] "RemoveContainer" containerID="43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.069577 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6704713-5ff0-4b3e-83b9-8b08183dcb96-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.069966 4823 scope.go:117] "RemoveContainer" containerID="450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.071967 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tll7v"] Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.083558 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tll7v"] Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.109044 4823 scope.go:117] "RemoveContainer" containerID="a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.143485 4823 scope.go:117] "RemoveContainer" containerID="43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55" Dec 06 07:11:30 crc kubenswrapper[4823]: E1206 07:11:30.143988 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55\": container with ID starting with 43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55 not found: ID does not exist" containerID="43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.144046 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55"} err="failed to get container status \"43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55\": rpc error: code = NotFound desc = could not find container \"43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55\": container with ID starting with 43327b46636ee1e14684029f3db04f9918f3828601f6e3975a927db8d88fde55 not found: ID does not exist" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.144073 4823 scope.go:117] "RemoveContainer" containerID="450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595" Dec 06 07:11:30 crc kubenswrapper[4823]: E1206 07:11:30.144372 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595\": container with ID starting with 450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595 not found: ID does not exist" containerID="450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.144395 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595"} err="failed to get container status \"450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595\": rpc error: code = NotFound desc = could not find container \"450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595\": container with ID starting with 450cd209beeda00f4da60b23bfc2d7b0052931218adfe6f6712e78b1ef81a595 not found: ID does not exist" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.144413 4823 scope.go:117] "RemoveContainer" containerID="a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77" Dec 06 07:11:30 crc kubenswrapper[4823]: E1206 07:11:30.144634 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77\": container with ID starting with a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77 not found: ID does not exist" containerID="a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77" Dec 06 07:11:30 crc kubenswrapper[4823]: I1206 07:11:30.144654 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77"} err="failed to get container status \"a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77\": rpc error: code = NotFound desc = could not find container \"a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77\": container with ID starting with a5519f335cfd8b1edbbdbd31610f7fe2338118f9026f63792a1d580c6e50cb77 not found: ID does not exist" Dec 06 07:11:31 crc kubenswrapper[4823]: I1206 07:11:31.154396 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" path="/var/lib/kubelet/pods/f6704713-5ff0-4b3e-83b9-8b08183dcb96/volumes" Dec 06 07:11:36 crc kubenswrapper[4823]: I1206 07:11:36.052216 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:11:36 crc kubenswrapper[4823]: I1206 07:11:36.052869 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:11:36 crc kubenswrapper[4823]: I1206 07:11:36.052922 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:11:36 crc kubenswrapper[4823]: I1206 07:11:36.053522 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11d8ef1b4ff1a1c78e63818e4e070da86c8a1b557be15aa5354f8e9ec01a1273"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:11:36 crc kubenswrapper[4823]: I1206 07:11:36.053578 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" 
podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://11d8ef1b4ff1a1c78e63818e4e070da86c8a1b557be15aa5354f8e9ec01a1273" gracePeriod=600 Dec 06 07:11:37 crc kubenswrapper[4823]: I1206 07:11:37.097177 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="11d8ef1b4ff1a1c78e63818e4e070da86c8a1b557be15aa5354f8e9ec01a1273" exitCode=0 Dec 06 07:11:37 crc kubenswrapper[4823]: I1206 07:11:37.097365 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"11d8ef1b4ff1a1c78e63818e4e070da86c8a1b557be15aa5354f8e9ec01a1273"} Dec 06 07:11:37 crc kubenswrapper[4823]: I1206 07:11:37.097710 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96"} Dec 06 07:11:37 crc kubenswrapper[4823]: I1206 07:11:37.097732 4823 scope.go:117] "RemoveContainer" containerID="1bc7853904711dfd885d012ea45cfd55af7e61ff78867193b8a901e7bbb7442e" Dec 06 07:12:20 crc kubenswrapper[4823]: I1206 07:12:20.575283 4823 generic.go:334] "Generic (PLEG): container finished" podID="63f55880-0615-44ec-a7b5-318e731d45c1" containerID="223d1db347988fbd93e58bcf27edacff259aebaa7f33264b7fdd26641f82f11d" exitCode=0 Dec 06 07:12:20 crc kubenswrapper[4823]: I1206 07:12:20.575341 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" event={"ID":"63f55880-0615-44ec-a7b5-318e731d45c1","Type":"ContainerDied","Data":"223d1db347988fbd93e58bcf27edacff259aebaa7f33264b7fdd26641f82f11d"} Dec 06 07:12:21 crc kubenswrapper[4823]: I1206 07:12:21.993907 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.185902 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-1\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.186259 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-0\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.186344 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-0\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.186403 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-combined-ca-bundle\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.187194 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-1\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.187268 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-inventory\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.187312 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/63f55880-0615-44ec-a7b5-318e731d45c1-nova-extra-config-0\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.187337 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-ssh-key\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.187381 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmvl\" (UniqueName: \"kubernetes.io/projected/63f55880-0615-44ec-a7b5-318e731d45c1-kube-api-access-7nmvl\") pod \"63f55880-0615-44ec-a7b5-318e731d45c1\" (UID: \"63f55880-0615-44ec-a7b5-318e731d45c1\") " Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.192820 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/63f55880-0615-44ec-a7b5-318e731d45c1-kube-api-access-7nmvl" (OuterVolumeSpecName: "kube-api-access-7nmvl") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "kube-api-access-7nmvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.193022 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.216003 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.217737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.219562 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-inventory" (OuterVolumeSpecName: "inventory") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.219991 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f55880-0615-44ec-a7b5-318e731d45c1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.221441 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.226780 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.228930 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "63f55880-0615-44ec-a7b5-318e731d45c1" (UID: "63f55880-0615-44ec-a7b5-318e731d45c1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289433 4823 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289484 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289498 4823 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/63f55880-0615-44ec-a7b5-318e731d45c1-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289510 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289527 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nmvl\" (UniqueName: \"kubernetes.io/projected/63f55880-0615-44ec-a7b5-318e731d45c1-kube-api-access-7nmvl\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289538 4823 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289548 4823 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289559 4823 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.289572 4823 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f55880-0615-44ec-a7b5-318e731d45c1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.596292 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" event={"ID":"63f55880-0615-44ec-a7b5-318e731d45c1","Type":"ContainerDied","Data":"187cd0e718937fa18753c4e6dc1b6d486fcf29f6c636c6b1033f53ee64c3a401"} Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.596331 4823 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="187cd0e718937fa18753c4e6dc1b6d486fcf29f6c636c6b1033f53ee64c3a401" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.596380 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gbdl6" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.701088 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs"] Dec 06 07:12:22 crc kubenswrapper[4823]: E1206 07:12:22.701815 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="extract-content" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.701844 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="extract-content" Dec 06 07:12:22 crc kubenswrapper[4823]: E1206 07:12:22.701865 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f55880-0615-44ec-a7b5-318e731d45c1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.701878 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f55880-0615-44ec-a7b5-318e731d45c1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 06 07:12:22 crc kubenswrapper[4823]: E1206 07:12:22.701896 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="extract-utilities" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.701909 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="extract-utilities" Dec 06 07:12:22 crc kubenswrapper[4823]: E1206 07:12:22.701948 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="registry-server" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.701960 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="registry-server" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.702275 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f55880-0615-44ec-a7b5-318e731d45c1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.702309 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6704713-5ff0-4b3e-83b9-8b08183dcb96" containerName="registry-server" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.703475 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.705852 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.706376 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.708036 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.708137 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xqh9k" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.708789 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.711559 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs"] Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.800570 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.800633 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.800828 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.800998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.801077 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 
06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.801113 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.801224 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwl9h\" (UniqueName: \"kubernetes.io/projected/b7b49501-c951-4829-8791-c27d6e01a606-kube-api-access-rwl9h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwl9h\" (UniqueName: \"kubernetes.io/projected/b7b49501-c951-4829-8791-c27d6e01a606-kube-api-access-rwl9h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903827 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903898 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: 
\"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.903967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.909954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.910030 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.910856 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.912386 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.920025 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.922410 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwl9h\" (UniqueName: \"kubernetes.io/projected/b7b49501-c951-4829-8791-c27d6e01a606-kube-api-access-rwl9h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:22 crc kubenswrapper[4823]: I1206 07:12:22.922926 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs\" (UID: 
\"b7b49501-c951-4829-8791-c27d6e01a606\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:23 crc kubenswrapper[4823]: I1206 07:12:23.039021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:12:23 crc kubenswrapper[4823]: I1206 07:12:23.624266 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs"] Dec 06 07:12:24 crc kubenswrapper[4823]: I1206 07:12:24.614955 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" event={"ID":"b7b49501-c951-4829-8791-c27d6e01a606","Type":"ContainerStarted","Data":"8b401a9fadf8877253f1e61382cc87a7252c5eebdf2f9cad119f129b735c66ba"} Dec 06 07:12:24 crc kubenswrapper[4823]: I1206 07:12:24.615465 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" event={"ID":"b7b49501-c951-4829-8791-c27d6e01a606","Type":"ContainerStarted","Data":"37e27a67a8ce1b2b63caee9850b8f46ae5fe89c05cb31c43753dd63d47936c44"} Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.552532 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" podStartSLOduration=59.103716162 podStartE2EDuration="59.552496939s" podCreationTimestamp="2025-12-06 07:12:22 +0000 UTC" firstStartedPulling="2025-12-06 07:12:23.628907139 +0000 UTC m=+2844.914659099" lastFinishedPulling="2025-12-06 07:12:24.077687906 +0000 UTC m=+2845.363439876" observedRunningTime="2025-12-06 07:12:24.63896644 +0000 UTC m=+2845.924718400" watchObservedRunningTime="2025-12-06 07:13:21.552496939 +0000 UTC m=+2902.838248899" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.562002 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96wxx"] Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.564903 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.573980 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96wxx"] Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.694117 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4sxh\" (UniqueName: \"kubernetes.io/projected/355d75fc-594a-4495-b678-ac153628eff8-kube-api-access-z4sxh\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.694442 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-catalog-content\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.694637 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-utilities\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.797041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-utilities\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.797220 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4sxh\" (UniqueName: \"kubernetes.io/projected/355d75fc-594a-4495-b678-ac153628eff8-kube-api-access-z4sxh\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.797267 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-catalog-content\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.797654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-catalog-content\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.797650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-utilities\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.819344 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z4sxh\" (UniqueName: \"kubernetes.io/projected/355d75fc-594a-4495-b678-ac153628eff8-kube-api-access-z4sxh\") pod \"certified-operators-96wxx\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:21 crc kubenswrapper[4823]: I1206 07:13:21.899240 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:22 crc kubenswrapper[4823]: I1206 07:13:22.428201 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96wxx"] Dec 06 07:13:23 crc kubenswrapper[4823]: I1206 07:13:23.412065 4823 generic.go:334] "Generic (PLEG): container finished" podID="355d75fc-594a-4495-b678-ac153628eff8" containerID="781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054" exitCode=0 Dec 06 07:13:23 crc kubenswrapper[4823]: I1206 07:13:23.412188 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerDied","Data":"781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054"} Dec 06 07:13:23 crc kubenswrapper[4823]: I1206 07:13:23.412432 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerStarted","Data":"bb88cffaf7f7aa4b1eb0346835d5a48a6e8b934428e9907a5c215338c4b1d541"} Dec 06 07:13:25 crc kubenswrapper[4823]: I1206 07:13:25.439909 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerStarted","Data":"c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413"} Dec 06 07:13:26 crc kubenswrapper[4823]: I1206 07:13:26.451599 4823 generic.go:334] "Generic (PLEG): container finished" podID="355d75fc-594a-4495-b678-ac153628eff8" containerID="c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413" exitCode=0 Dec 06 07:13:26 crc kubenswrapper[4823]: I1206 07:13:26.451773 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerDied","Data":"c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413"} Dec 06 07:13:27 crc kubenswrapper[4823]: I1206 07:13:27.567163 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerStarted","Data":"6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597"} Dec 06 07:13:27 crc kubenswrapper[4823]: I1206 07:13:27.592201 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96wxx" podStartSLOduration=3.086856053 podStartE2EDuration="6.59217193s" podCreationTimestamp="2025-12-06 07:13:21 +0000 UTC" firstStartedPulling="2025-12-06 07:13:23.414812465 +0000 UTC m=+2904.700564425" lastFinishedPulling="2025-12-06 07:13:26.920128332 +0000 UTC m=+2908.205880302" observedRunningTime="2025-12-06 07:13:27.585071213 +0000 UTC m=+2908.870823173" watchObservedRunningTime="2025-12-06 07:13:27.59217193 +0000 UTC m=+2908.877923890" Dec 06 07:13:31 crc kubenswrapper[4823]: I1206 07:13:31.899961 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:31 crc kubenswrapper[4823]: I1206 07:13:31.900594 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:31 crc kubenswrapper[4823]: I1206 07:13:31.957853 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:32 crc kubenswrapper[4823]: I1206 07:13:32.711384 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:33 crc kubenswrapper[4823]: I1206 07:13:33.721099 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96wxx"] Dec 06 07:13:34 crc kubenswrapper[4823]: I1206 07:13:34.689496 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-96wxx" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="registry-server" containerID="cri-o://6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597" gracePeriod=2 Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.272634 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.307874 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-catalog-content\") pod \"355d75fc-594a-4495-b678-ac153628eff8\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.307975 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-utilities\") pod \"355d75fc-594a-4495-b678-ac153628eff8\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.308068 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4sxh\" (UniqueName: \"kubernetes.io/projected/355d75fc-594a-4495-b678-ac153628eff8-kube-api-access-z4sxh\") pod \"355d75fc-594a-4495-b678-ac153628eff8\" (UID: \"355d75fc-594a-4495-b678-ac153628eff8\") " Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.308978 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-utilities" (OuterVolumeSpecName: "utilities") pod "355d75fc-594a-4495-b678-ac153628eff8" (UID: "355d75fc-594a-4495-b678-ac153628eff8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.309882 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.314625 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355d75fc-594a-4495-b678-ac153628eff8-kube-api-access-z4sxh" (OuterVolumeSpecName: "kube-api-access-z4sxh") pod "355d75fc-594a-4495-b678-ac153628eff8" (UID: "355d75fc-594a-4495-b678-ac153628eff8"). InnerVolumeSpecName "kube-api-access-z4sxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.369076 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "355d75fc-594a-4495-b678-ac153628eff8" (UID: "355d75fc-594a-4495-b678-ac153628eff8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.411701 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355d75fc-594a-4495-b678-ac153628eff8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.411739 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4sxh\" (UniqueName: \"kubernetes.io/projected/355d75fc-594a-4495-b678-ac153628eff8-kube-api-access-z4sxh\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.702410 4823 generic.go:334] "Generic (PLEG): container finished" podID="355d75fc-594a-4495-b678-ac153628eff8" containerID="6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597" exitCode=0 Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.702440 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96wxx" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.702473 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerDied","Data":"6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597"} Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.702980 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96wxx" event={"ID":"355d75fc-594a-4495-b678-ac153628eff8","Type":"ContainerDied","Data":"bb88cffaf7f7aa4b1eb0346835d5a48a6e8b934428e9907a5c215338c4b1d541"} Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.703037 4823 scope.go:117] "RemoveContainer" containerID="6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.732251 4823 scope.go:117] "RemoveContainer" containerID="c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.743133 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96wxx"] Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.757313 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-96wxx"] Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.761783 4823 scope.go:117] "RemoveContainer" containerID="781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.835446 4823 scope.go:117] "RemoveContainer" containerID="6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597" Dec 06 07:13:35 crc kubenswrapper[4823]: E1206 07:13:35.836436 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597\": container with ID starting with 
6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597 not found: ID does not exist" containerID="6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.836522 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597"} err="failed to get container status \"6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597\": rpc error: code = NotFound desc = could not find container \"6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597\": container with ID starting with 6478175f7fb7b65e92f7bedd7fe5e07c34f9eb76778683893ebb1f4688b95597 not found: ID does not exist" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.836557 4823 scope.go:117] "RemoveContainer" containerID="c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413" Dec 06 07:13:35 crc kubenswrapper[4823]: E1206 07:13:35.837206 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413\": container with ID starting with c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413 not found: ID does not exist" containerID="c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.837395 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413"} err="failed to get container status \"c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413\": rpc error: code = NotFound desc = could not find container \"c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413\": container with ID starting with c0bb60651459d8703468b2616795abd48c2e1cf2d0f1270805375b141a139413 not found: ID does not exist" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.837564 4823 scope.go:117] "RemoveContainer" containerID="781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054" Dec 06 07:13:35 crc kubenswrapper[4823]: E1206 07:13:35.838340 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054\": container with ID starting with 781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054 not found: ID does not exist" containerID="781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054" Dec 06 07:13:35 crc kubenswrapper[4823]: I1206 07:13:35.838380 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054"} err="failed to get container status \"781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054\": rpc error: code = NotFound desc = could not find container \"781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054\": container with ID starting with 781dad1cf5b3c89c9941ebf9e468e2823f2f6809fb3b2ff888c87bad2b4dd054 not found: ID does not exist" Dec 06 07:13:36 crc kubenswrapper[4823]: I1206 07:13:36.051289 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Dec 06 07:13:36 crc kubenswrapper[4823]: I1206 07:13:36.051346 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:13:37 crc kubenswrapper[4823]: I1206 07:13:37.321741 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355d75fc-594a-4495-b678-ac153628eff8" path="/var/lib/kubelet/pods/355d75fc-594a-4495-b678-ac153628eff8/volumes" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.130610 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gw7vl"] Dec 06 07:13:40 crc kubenswrapper[4823]: E1206 07:13:40.131631 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="registry-server" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.131650 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="registry-server" Dec 06 07:13:40 crc kubenswrapper[4823]: E1206 07:13:40.131704 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="extract-utilities" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.131711 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="extract-utilities" Dec 06 07:13:40 crc kubenswrapper[4823]: E1206 07:13:40.131726 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="extract-content" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.131733 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="extract-content" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.131955 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="355d75fc-594a-4495-b678-ac153628eff8" containerName="registry-server" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.133498 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.145156 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gw7vl"] Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.283347 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29trp\" (UniqueName: \"kubernetes.io/projected/f14f7e30-f8ce-4679-9959-0646b72311e6-kube-api-access-29trp\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.283441 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-utilities\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.283795 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-catalog-content\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.385318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-utilities\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.385503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-catalog-content\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.385542 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29trp\" (UniqueName: \"kubernetes.io/projected/f14f7e30-f8ce-4679-9959-0646b72311e6-kube-api-access-29trp\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.386356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-utilities\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.386588 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-catalog-content\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.407731 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-29trp\" (UniqueName: \"kubernetes.io/projected/f14f7e30-f8ce-4679-9959-0646b72311e6-kube-api-access-29trp\") pod \"community-operators-gw7vl\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:40 crc kubenswrapper[4823]: I1206 07:13:40.464341 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.000733 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqvm"] Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.005689 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.021626 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqvm"] Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.059015 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gw7vl"] Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.202290 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfrn\" (UniqueName: \"kubernetes.io/projected/5965557f-7d19-41bb-880f-204318350852-kube-api-access-7kfrn\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.202642 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-utilities\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.202768 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-catalog-content\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.305248 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-utilities\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.305291 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfrn\" (UniqueName: \"kubernetes.io/projected/5965557f-7d19-41bb-880f-204318350852-kube-api-access-7kfrn\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.305319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-catalog-content\") pod \"redhat-marketplace-xjqvm\" (UID: 
\"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.305685 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-utilities\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.305733 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-catalog-content\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.327065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfrn\" (UniqueName: \"kubernetes.io/projected/5965557f-7d19-41bb-880f-204318350852-kube-api-access-7kfrn\") pod \"redhat-marketplace-xjqvm\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.626888 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.772657 4823 generic.go:334] "Generic (PLEG): container finished" podID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerID="ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac" exitCode=0 Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.772766 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerDied","Data":"ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac"} Dec 06 07:13:41 crc kubenswrapper[4823]: I1206 07:13:41.773035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerStarted","Data":"da355e5df00b194e4e6176f70c857cedda68f3b4fe0a8b88f7cc02f0fdde73eb"} Dec 06 07:13:42 crc kubenswrapper[4823]: I1206 07:13:42.159884 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqvm"] Dec 06 07:13:42 crc kubenswrapper[4823]: W1206 07:13:42.161296 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5965557f_7d19_41bb_880f_204318350852.slice/crio-643cfaf9f21396f2325ac7d1a214fa5a8688bf47424f1920af4ad3749ce391ae WatchSource:0}: Error finding container 643cfaf9f21396f2325ac7d1a214fa5a8688bf47424f1920af4ad3749ce391ae: Status 404 returned error can't find the container with id 643cfaf9f21396f2325ac7d1a214fa5a8688bf47424f1920af4ad3749ce391ae Dec 06 07:13:42 crc kubenswrapper[4823]: I1206 07:13:42.798296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerStarted","Data":"08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5"} Dec 06 07:13:42 crc kubenswrapper[4823]: I1206 07:13:42.801811 4823 generic.go:334] "Generic (PLEG): container finished" podID="5965557f-7d19-41bb-880f-204318350852" 
containerID="c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575" exitCode=0 Dec 06 07:13:42 crc kubenswrapper[4823]: I1206 07:13:42.801873 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerDied","Data":"c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575"} Dec 06 07:13:42 crc kubenswrapper[4823]: I1206 07:13:42.801915 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerStarted","Data":"643cfaf9f21396f2325ac7d1a214fa5a8688bf47424f1920af4ad3749ce391ae"} Dec 06 07:13:43 crc kubenswrapper[4823]: I1206 07:13:43.813946 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerStarted","Data":"20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b"} Dec 06 07:13:43 crc kubenswrapper[4823]: I1206 07:13:43.817242 4823 generic.go:334] "Generic (PLEG): container finished" podID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerID="08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5" exitCode=0 Dec 06 07:13:43 crc kubenswrapper[4823]: I1206 07:13:43.817302 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerDied","Data":"08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5"} Dec 06 07:13:44 crc kubenswrapper[4823]: I1206 07:13:44.829231 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerStarted","Data":"fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9"} Dec 06 07:13:44 crc kubenswrapper[4823]: I1206 07:13:44.916433 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gw7vl" podStartSLOduration=2.212982611 podStartE2EDuration="4.91641481s" podCreationTimestamp="2025-12-06 07:13:40 +0000 UTC" firstStartedPulling="2025-12-06 07:13:41.77467813 +0000 UTC m=+2923.060430090" lastFinishedPulling="2025-12-06 07:13:44.478110329 +0000 UTC m=+2925.763862289" observedRunningTime="2025-12-06 07:13:44.907645915 +0000 UTC m=+2926.193397875" watchObservedRunningTime="2025-12-06 07:13:44.91641481 +0000 UTC m=+2926.202166770" Dec 06 07:13:45 crc kubenswrapper[4823]: I1206 07:13:45.846343 4823 generic.go:334] "Generic (PLEG): container finished" podID="5965557f-7d19-41bb-880f-204318350852" containerID="20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b" exitCode=0 Dec 06 07:13:45 crc kubenswrapper[4823]: I1206 07:13:45.846410 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerDied","Data":"20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b"} Dec 06 07:13:47 crc kubenswrapper[4823]: I1206 07:13:47.872944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerStarted","Data":"49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a"} Dec 06 07:13:47 crc kubenswrapper[4823]: I1206 07:13:47.893703 
4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xjqvm" podStartSLOduration=3.953892355 podStartE2EDuration="7.893636391s" podCreationTimestamp="2025-12-06 07:13:40 +0000 UTC" firstStartedPulling="2025-12-06 07:13:42.803821047 +0000 UTC m=+2924.089573007" lastFinishedPulling="2025-12-06 07:13:46.743565083 +0000 UTC m=+2928.029317043" observedRunningTime="2025-12-06 07:13:47.8902019 +0000 UTC m=+2929.175953860" watchObservedRunningTime="2025-12-06 07:13:47.893636391 +0000 UTC m=+2929.179388351" Dec 06 07:13:50 crc kubenswrapper[4823]: I1206 07:13:50.466078 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:50 crc kubenswrapper[4823]: I1206 07:13:50.466557 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:50 crc kubenswrapper[4823]: I1206 07:13:50.524785 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:50 crc kubenswrapper[4823]: I1206 07:13:50.943924 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:50 crc kubenswrapper[4823]: I1206 07:13:50.998900 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gw7vl"] Dec 06 07:13:51 crc kubenswrapper[4823]: I1206 07:13:51.627726 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:51 crc kubenswrapper[4823]: I1206 07:13:51.627793 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:51 crc kubenswrapper[4823]: I1206 07:13:51.681189 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:13:52 crc kubenswrapper[4823]: I1206 07:13:52.936191 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gw7vl" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="registry-server" containerID="cri-o://fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9" gracePeriod=2 Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.452916 4823 util.go:48] "No ready sandbox for pod can be found. 
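Note: the two "Observed pod startup duration" records above encode a simple relationship: podStartSLOduration = podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so the SLO figure excludes pull time. The m=+NNNN suffixes are Go's monotonic-clock readings appended by time.Time's String(), here seconds since the kubelet process started. Checking community-operators-gw7vl with the logged offsets:

```go
package main

// Verify the pod_startup_latency_tracker arithmetic for
// community-operators-gw7vl using the m=+ monotonic offsets from the log.
import "fmt"

func main() {
	firstStartedPulling := 2923.060430090 // m=+ at firstStartedPulling
	lastFinishedPulling := 2925.763862289 // m=+ at lastFinishedPulling
	e2e := 4.916414810                    // podStartE2EDuration, seconds

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window: %.9fs\n", pull)     // ~2.703432199s
	fmt.Printf("slo:         %.9fs\n", e2e-pull) // ~2.212982611s, as logged
}
```

The redhat-marketplace-xjqvm record above follows the same identity: 7.893636391 - (2928.029317043 - 2924.089573007) = 3.953892355, matching its logged podStartSLOduration.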
Need to start a new one" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.582114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29trp\" (UniqueName: \"kubernetes.io/projected/f14f7e30-f8ce-4679-9959-0646b72311e6-kube-api-access-29trp\") pod \"f14f7e30-f8ce-4679-9959-0646b72311e6\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.582349 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-catalog-content\") pod \"f14f7e30-f8ce-4679-9959-0646b72311e6\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.582464 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-utilities\") pod \"f14f7e30-f8ce-4679-9959-0646b72311e6\" (UID: \"f14f7e30-f8ce-4679-9959-0646b72311e6\") " Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.583129 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-utilities" (OuterVolumeSpecName: "utilities") pod "f14f7e30-f8ce-4679-9959-0646b72311e6" (UID: "f14f7e30-f8ce-4679-9959-0646b72311e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.594143 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14f7e30-f8ce-4679-9959-0646b72311e6-kube-api-access-29trp" (OuterVolumeSpecName: "kube-api-access-29trp") pod "f14f7e30-f8ce-4679-9959-0646b72311e6" (UID: "f14f7e30-f8ce-4679-9959-0646b72311e6"). InnerVolumeSpecName "kube-api-access-29trp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.639281 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f14f7e30-f8ce-4679-9959-0646b72311e6" (UID: "f14f7e30-f8ce-4679-9959-0646b72311e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.684821 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.684858 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f14f7e30-f8ce-4679-9959-0646b72311e6-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.684869 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29trp\" (UniqueName: \"kubernetes.io/projected/f14f7e30-f8ce-4679-9959-0646b72311e6-kube-api-access-29trp\") on node \"crc\" DevicePath \"\"" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.952355 4823 generic.go:334] "Generic (PLEG): container finished" podID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerID="fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9" exitCode=0 Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.952411 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerDied","Data":"fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9"} Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.952442 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw7vl" event={"ID":"f14f7e30-f8ce-4679-9959-0646b72311e6","Type":"ContainerDied","Data":"da355e5df00b194e4e6176f70c857cedda68f3b4fe0a8b88f7cc02f0fdde73eb"} Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.952464 4823 scope.go:117] "RemoveContainer" containerID="fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.952615 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gw7vl" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.989877 4823 scope.go:117] "RemoveContainer" containerID="08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5" Dec 06 07:13:53 crc kubenswrapper[4823]: I1206 07:13:53.998569 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gw7vl"] Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.009059 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gw7vl"] Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.021215 4823 scope.go:117] "RemoveContainer" containerID="ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac" Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.069475 4823 scope.go:117] "RemoveContainer" containerID="fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9" Dec 06 07:13:54 crc kubenswrapper[4823]: E1206 07:13:54.071792 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9\": container with ID starting with fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9 not found: ID does not exist" containerID="fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9" Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.071935 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9"} err="failed to get container status \"fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9\": rpc error: code = NotFound desc = could not find container \"fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9\": container with ID starting with fab393008f340e5891344e599bd5db97f9bc396106476538704af6133c4a40f9 not found: ID does not exist" Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.072034 4823 scope.go:117] "RemoveContainer" containerID="08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5" Dec 06 07:13:54 crc kubenswrapper[4823]: E1206 07:13:54.072460 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5\": container with ID starting with 08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5 not found: ID does not exist" containerID="08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5" Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.072490 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5"} err="failed to get container status \"08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5\": rpc error: code = NotFound desc = could not find container \"08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5\": container with ID starting with 08a8820dd2fa9798c06edee9b918a6cf0f77975c13949018b5ef300736765cc5 not found: ID does not exist" Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.072508 4823 scope.go:117] "RemoveContainer" containerID="ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac" Dec 06 07:13:54 crc kubenswrapper[4823]: E1206 07:13:54.072802 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac\": container with ID starting with ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac not found: ID does not exist" containerID="ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac" Dec 06 07:13:54 crc kubenswrapper[4823]: I1206 07:13:54.072826 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac"} err="failed to get container status \"ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac\": rpc error: code = NotFound desc = could not find container \"ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac\": container with ID starting with ecddd860544ac177809998f8d57b8b8c7fdd42c6632fdf52e975c31f3f119dac not found: ID does not exist" Dec 06 07:13:55 crc kubenswrapper[4823]: I1206 07:13:55.152159 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" path="/var/lib/kubelet/pods/f14f7e30-f8ce-4679-9959-0646b72311e6/volumes" Dec 06 07:14:01 crc kubenswrapper[4823]: I1206 07:14:01.674090 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:14:01 crc kubenswrapper[4823]: I1206 07:14:01.730002 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqvm"] Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.023899 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xjqvm" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="registry-server" containerID="cri-o://49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a" gracePeriod=2 Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.507594 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.636359 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kfrn\" (UniqueName: \"kubernetes.io/projected/5965557f-7d19-41bb-880f-204318350852-kube-api-access-7kfrn\") pod \"5965557f-7d19-41bb-880f-204318350852\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.636709 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-utilities\") pod \"5965557f-7d19-41bb-880f-204318350852\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.637008 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-catalog-content\") pod \"5965557f-7d19-41bb-880f-204318350852\" (UID: \"5965557f-7d19-41bb-880f-204318350852\") " Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.637521 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-utilities" (OuterVolumeSpecName: "utilities") pod "5965557f-7d19-41bb-880f-204318350852" (UID: "5965557f-7d19-41bb-880f-204318350852"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.637953 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.642614 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5965557f-7d19-41bb-880f-204318350852-kube-api-access-7kfrn" (OuterVolumeSpecName: "kube-api-access-7kfrn") pod "5965557f-7d19-41bb-880f-204318350852" (UID: "5965557f-7d19-41bb-880f-204318350852"). InnerVolumeSpecName "kube-api-access-7kfrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.667038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5965557f-7d19-41bb-880f-204318350852" (UID: "5965557f-7d19-41bb-880f-204318350852"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.739614 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kfrn\" (UniqueName: \"kubernetes.io/projected/5965557f-7d19-41bb-880f-204318350852-kube-api-access-7kfrn\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:02 crc kubenswrapper[4823]: I1206 07:14:02.740573 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5965557f-7d19-41bb-880f-204318350852-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.035444 4823 generic.go:334] "Generic (PLEG): container finished" podID="5965557f-7d19-41bb-880f-204318350852" containerID="49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a" exitCode=0 Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.035508 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqvm" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.035528 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerDied","Data":"49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a"} Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.035952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqvm" event={"ID":"5965557f-7d19-41bb-880f-204318350852","Type":"ContainerDied","Data":"643cfaf9f21396f2325ac7d1a214fa5a8688bf47424f1920af4ad3749ce391ae"} Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.035975 4823 scope.go:117] "RemoveContainer" containerID="49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.063881 4823 scope.go:117] "RemoveContainer" containerID="20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.074567 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqvm"] Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.083787 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqvm"] Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.103893 4823 scope.go:117] "RemoveContainer" containerID="c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.132090 4823 scope.go:117] "RemoveContainer" containerID="49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a" Dec 06 07:14:03 crc kubenswrapper[4823]: E1206 07:14:03.132704 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a\": container with ID starting with 49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a not found: ID does not exist" containerID="49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.132737 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a"} err="failed to get container status \"49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a\": rpc error: code = NotFound desc = could not find container \"49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a\": container with ID starting with 49264f60110b70df7f549c533f5b67321df0243eed150182a0260936785a248a not found: ID does not exist" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.132758 4823 scope.go:117] "RemoveContainer" containerID="20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b" Dec 06 07:14:03 crc kubenswrapper[4823]: E1206 07:14:03.133040 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b\": container with ID starting with 20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b not found: ID does not exist" containerID="20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.133094 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b"} err="failed to get container status \"20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b\": rpc error: code = NotFound desc = could not find container \"20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b\": container with ID starting with 20097590557ffe0682ef187fbd8f9f268755d6b2927e9e1534afe86112c2783b not found: ID does not exist" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.133115 4823 scope.go:117] "RemoveContainer" containerID="c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575" Dec 06 07:14:03 crc kubenswrapper[4823]: E1206 07:14:03.133422 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575\": container with ID starting with c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575 not found: ID does not exist" containerID="c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.133496 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575"} err="failed to get container status \"c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575\": rpc error: code = NotFound desc = could not find container \"c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575\": container with ID starting with c9054159f2f1c5acb3c6ae5a7997f4bbfff0c2cc28e88e24f3e03f86a639f575 not found: ID does not exist" Dec 06 07:14:03 crc kubenswrapper[4823]: I1206 07:14:03.154031 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5965557f-7d19-41bb-880f-204318350852" path="/var/lib/kubelet/pods/5965557f-7d19-41bb-880f-204318350852/volumes" Dec 06 07:14:06 crc kubenswrapper[4823]: I1206 07:14:06.052244 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:14:06 crc kubenswrapper[4823]: I1206 07:14:06.052572 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:14:32 crc kubenswrapper[4823]: I1206 07:14:32.397044 4823 generic.go:334] "Generic (PLEG): container finished" podID="b7b49501-c951-4829-8791-c27d6e01a606" containerID="8b401a9fadf8877253f1e61382cc87a7252c5eebdf2f9cad119f129b735c66ba" exitCode=0 Dec 06 07:14:32 crc kubenswrapper[4823]: I1206 07:14:32.397130 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" event={"ID":"b7b49501-c951-4829-8791-c27d6e01a606","Type":"ContainerDied","Data":"8b401a9fadf8877253f1e61382cc87a7252c5eebdf2f9cad119f129b735c66ba"} Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.889775 4823 util.go:48] "No ready sandbox for pod can be found. 
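Note: the patch_prober/prober pairs recurring every 30 seconds in this excerpt (07:13:36, 07:14:06, 07:14:36) are HTTP liveness probes against machine-config-daemon's health endpoint; "connect: connection refused" means nothing is listening on 127.0.0.1:8798 at all. Stripped of the thresholds, period, and timeout configured in the pod spec (which the log does not show), such a probe reduces to roughly the following; only the URL is taken from the log:

```go
package main

// Minimal approximation of the failing HTTP liveness check above. The
// timeout is an assumption; the URL comes from the logged probe output.
// Kubelet HTTP probes treat 2xx/3xx responses as success.
import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```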
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.950566 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-0\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.950622 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-1\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.950822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ssh-key\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.950913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-inventory\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.950984 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwl9h\" (UniqueName: \"kubernetes.io/projected/b7b49501-c951-4829-8791-c27d6e01a606-kube-api-access-rwl9h\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.951159 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-telemetry-combined-ca-bundle\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.951229 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-2\") pod \"b7b49501-c951-4829-8791-c27d6e01a606\" (UID: \"b7b49501-c951-4829-8791-c27d6e01a606\") " Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.959763 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.963221 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b49501-c951-4829-8791-c27d6e01a606-kube-api-access-rwl9h" (OuterVolumeSpecName: "kube-api-access-rwl9h") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). 
InnerVolumeSpecName "kube-api-access-rwl9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.983112 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.983514 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-inventory" (OuterVolumeSpecName: "inventory") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.988428 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.994948 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:14:33 crc kubenswrapper[4823]: I1206 07:14:33.998475 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b7b49501-c951-4829-8791-c27d6e01a606" (UID: "b7b49501-c951-4829-8791-c27d6e01a606"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054527 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054564 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054576 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054588 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054600 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwl9h\" (UniqueName: \"kubernetes.io/projected/b7b49501-c951-4829-8791-c27d6e01a606-kube-api-access-rwl9h\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054611 4823 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.054619 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b7b49501-c951-4829-8791-c27d6e01a606-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.418739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" event={"ID":"b7b49501-c951-4829-8791-c27d6e01a606","Type":"ContainerDied","Data":"37e27a67a8ce1b2b63caee9850b8f46ae5fe89c05cb31c43753dd63d47936c44"} Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.418789 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e27a67a8ce1b2b63caee9850b8f46ae5fe89c05cb31c43753dd63d47936c44" Dec 06 07:14:34 crc kubenswrapper[4823]: I1206 07:14:34.418897 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs" Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.051770 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.052102 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.052147 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.052948 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.053009 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" gracePeriod=600 Dec 06 07:14:36 crc kubenswrapper[4823]: E1206 07:14:36.187685 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.438410 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" exitCode=0 Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.438462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96"} Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.438505 4823 scope.go:117] "RemoveContainer" containerID="11d8ef1b4ff1a1c78e63818e4e070da86c8a1b557be15aa5354f8e9ec01a1273" Dec 06 07:14:36 crc kubenswrapper[4823]: I1206 07:14:36.439250 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:14:36 crc kubenswrapper[4823]: E1206 07:14:36.439566 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:14:51 crc kubenswrapper[4823]: I1206 07:14:51.141307 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:14:51 crc kubenswrapper[4823]: E1206 07:14:51.143028 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.165786 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"] Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.166894 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.166912 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.166931 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.166940 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.166951 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.166959 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.166979 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.166987 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.167004 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.167012 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.167025 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.167033 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4823]: 
Dec 06 07:15:00 crc kubenswrapper[4823]: E1206 07:15:00.167062 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b49501-c951-4829-8791-c27d6e01a606" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.167075 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b49501-c951-4829-8791-c27d6e01a606" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.167359 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14f7e30-f8ce-4679-9959-0646b72311e6" containerName="registry-server"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.167379 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b49501-c951-4829-8791-c27d6e01a606" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.167398 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5965557f-7d19-41bb-880f-204318350852" containerName="registry-server"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.168571 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.173883 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.174954 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.182984 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"]
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.273388 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tzr4\" (UniqueName: \"kubernetes.io/projected/7d0238c3-a925-4c87-b5dd-b531b95f6019-kube-api-access-4tzr4\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.273754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0238c3-a925-4c87-b5dd-b531b95f6019-config-volume\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.273885 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d0238c3-a925-4c87-b5dd-b531b95f6019-secret-volume\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.377232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0238c3-a925-4c87-b5dd-b531b95f6019-config-volume\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.377345 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d0238c3-a925-4c87-b5dd-b531b95f6019-secret-volume\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.377537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tzr4\" (UniqueName: \"kubernetes.io/projected/7d0238c3-a925-4c87-b5dd-b531b95f6019-kube-api-access-4tzr4\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.379152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0238c3-a925-4c87-b5dd-b531b95f6019-config-volume\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.385360 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d0238c3-a925-4c87-b5dd-b531b95f6019-secret-volume\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.398599 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tzr4\" (UniqueName: \"kubernetes.io/projected/7d0238c3-a925-4c87-b5dd-b531b95f6019-kube-api-access-4tzr4\") pod \"collect-profiles-29416755-r6fk7\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:00 crc kubenswrapper[4823]: I1206 07:15:00.491768 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:01 crc kubenswrapper[4823]: I1206 07:15:01.634159 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"]
Dec 06 07:15:01 crc kubenswrapper[4823]: I1206 07:15:01.682948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7" event={"ID":"7d0238c3-a925-4c87-b5dd-b531b95f6019","Type":"ContainerStarted","Data":"2a0a94d2b5942eaf79af44d0fa073b52d58c8a71c0623d09e1100f5b260db991"}
Dec 06 07:15:02 crc kubenswrapper[4823]: I1206 07:15:02.709931 4823 generic.go:334] "Generic (PLEG): container finished" podID="7d0238c3-a925-4c87-b5dd-b531b95f6019" containerID="f11d87db2457e569934361f5d4f59e2cd613157960609cafe271267c7b6ca762" exitCode=0
Dec 06 07:15:02 crc kubenswrapper[4823]: I1206 07:15:02.710141 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7" event={"ID":"7d0238c3-a925-4c87-b5dd-b531b95f6019","Type":"ContainerDied","Data":"f11d87db2457e569934361f5d4f59e2cd613157960609cafe271267c7b6ca762"}
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.080631 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.191986 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d0238c3-a925-4c87-b5dd-b531b95f6019-secret-volume\") pod \"7d0238c3-a925-4c87-b5dd-b531b95f6019\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") "
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.192150 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tzr4\" (UniqueName: \"kubernetes.io/projected/7d0238c3-a925-4c87-b5dd-b531b95f6019-kube-api-access-4tzr4\") pod \"7d0238c3-a925-4c87-b5dd-b531b95f6019\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") "
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.192260 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0238c3-a925-4c87-b5dd-b531b95f6019-config-volume\") pod \"7d0238c3-a925-4c87-b5dd-b531b95f6019\" (UID: \"7d0238c3-a925-4c87-b5dd-b531b95f6019\") "
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.193554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0238c3-a925-4c87-b5dd-b531b95f6019-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d0238c3-a925-4c87-b5dd-b531b95f6019" (UID: "7d0238c3-a925-4c87-b5dd-b531b95f6019"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.199812 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0238c3-a925-4c87-b5dd-b531b95f6019-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d0238c3-a925-4c87-b5dd-b531b95f6019" (UID: "7d0238c3-a925-4c87-b5dd-b531b95f6019"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.200947 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0238c3-a925-4c87-b5dd-b531b95f6019-kube-api-access-4tzr4" (OuterVolumeSpecName: "kube-api-access-4tzr4") pod "7d0238c3-a925-4c87-b5dd-b531b95f6019" (UID: "7d0238c3-a925-4c87-b5dd-b531b95f6019"). InnerVolumeSpecName "kube-api-access-4tzr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.294403 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d0238c3-a925-4c87-b5dd-b531b95f6019-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.294453 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tzr4\" (UniqueName: \"kubernetes.io/projected/7d0238c3-a925-4c87-b5dd-b531b95f6019-kube-api-access-4tzr4\") on node \"crc\" DevicePath \"\""
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.294466 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0238c3-a925-4c87-b5dd-b531b95f6019-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.731997 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7" event={"ID":"7d0238c3-a925-4c87-b5dd-b531b95f6019","Type":"ContainerDied","Data":"2a0a94d2b5942eaf79af44d0fa073b52d58c8a71c0623d09e1100f5b260db991"}
Dec 06 07:15:04 crc kubenswrapper[4823]: I1206 07:15:04.732047 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0a94d2b5942eaf79af44d0fa073b52d58c8a71c0623d09e1100f5b260db991"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7" Dec 06 07:15:05 crc kubenswrapper[4823]: I1206 07:15:05.141806 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:15:05 crc kubenswrapper[4823]: E1206 07:15:05.142145 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:15:05 crc kubenswrapper[4823]: I1206 07:15:05.500062 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh"] Dec 06 07:15:05 crc kubenswrapper[4823]: I1206 07:15:05.514229 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-2mtlh"] Dec 06 07:15:07 crc kubenswrapper[4823]: I1206 07:15:07.155855 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e75b29f-2105-4ab0-9bc8-102729c188d2" path="/var/lib/kubelet/pods/7e75b29f-2105-4ab0-9bc8-102729c188d2/volumes" Dec 06 07:15:12 crc kubenswrapper[4823]: I1206 07:15:12.907849 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 06 07:15:12 crc kubenswrapper[4823]: E1206 07:15:12.908993 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0238c3-a925-4c87-b5dd-b531b95f6019" containerName="collect-profiles" Dec 06 07:15:12 crc kubenswrapper[4823]: I1206 07:15:12.909014 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0238c3-a925-4c87-b5dd-b531b95f6019" containerName="collect-profiles" Dec 06 07:15:12 crc kubenswrapper[4823]: I1206 07:15:12.909303 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0238c3-a925-4c87-b5dd-b531b95f6019" containerName="collect-profiles" Dec 06 07:15:12 crc kubenswrapper[4823]: I1206 07:15:12.939467 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 06 07:15:12 crc kubenswrapper[4823]: I1206 07:15:12.942500 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 06 07:15:12 crc kubenswrapper[4823]: I1206 07:15:12.947032 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.063076 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.065314 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.068880 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.077318 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135404 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135512 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-run\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-dev\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135813 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135884 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135942 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.135976 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-config-data\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136012 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136041 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-scripts\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136188 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136209 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.136263 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.147382 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153049 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-lib-cinder\") pod 
\"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153263 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153289 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-run\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153316 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtv7\" (UniqueName: \"kubernetes.io/projected/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-kube-api-access-wbtv7\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153339 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153368 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153408 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153437 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153478 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-lib-modules\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 
crc kubenswrapper[4823]: I1206 07:15:13.153494 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153514 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zlq\" (UniqueName: \"kubernetes.io/projected/5a942075-9497-4ebd-958d-14ea50b6558a-kube-api-access-29zlq\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.153537 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-sys\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.200904 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.202767 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.216947 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.251432 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.255967 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-config-data\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256020 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256045 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256060 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256078 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-sys\") pod \"cinder-volume-nfs-2-0\" 
(UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256125 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256142 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256160 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-scripts\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256213 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256230 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256252 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256271 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256346 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256371 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-run\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256405 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtv7\" (UniqueName: \"kubernetes.io/projected/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-kube-api-access-wbtv7\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256424 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256449 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256469 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-locks-cinder\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256488 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256565 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-lib-modules\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256642 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29zlq\" (UniqueName: \"kubernetes.io/projected/5a942075-9497-4ebd-958d-14ea50b6558a-kube-api-access-29zlq\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256670 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-sys\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256706 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256725 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256741 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-run\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256774 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256792 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256851 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhhq\" (UniqueName: \"kubernetes.io/projected/a27453d1-4b9a-481c-8145-5a31f7876f97-kube-api-access-jhhhq\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-dev\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256918 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.256998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.257029 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc 
kubenswrapper[4823]: I1206 07:15:13.257048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.257068 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.257152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.257336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.258082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.258237 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.258266 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-run\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.258561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.258796 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-run\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.258985 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc 
kubenswrapper[4823]: I1206 07:15:13.260802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.262920 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.270600 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.270689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.270756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.271201 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.271247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-lib-modules\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.271272 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.271296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-dev\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.272453 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.289086 
4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.289184 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a942075-9497-4ebd-958d-14ea50b6558a-sys\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.295346 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.302319 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.305724 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29zlq\" (UniqueName: \"kubernetes.io/projected/5a942075-9497-4ebd-958d-14ea50b6558a-kube-api-access-29zlq\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.306144 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-config-data\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.309974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtv7\" (UniqueName: \"kubernetes.io/projected/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-kube-api-access-wbtv7\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.316712 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.322374 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.334560 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-scripts\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.341592 
4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a942075-9497-4ebd-958d-14ea50b6558a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"5a942075-9497-4ebd-958d-14ea50b6558a\") " pod="openstack/cinder-backup-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.365393 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c006d7d9-a998-4d47-97e6-5f81a6c75c0e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"c006d7d9-a998-4d47-97e6-5f81a6c75c0e\") " pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366367 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366485 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhhq\" (UniqueName: \"kubernetes.io/projected/a27453d1-4b9a-481c-8145-5a31f7876f97-kube-api-access-jhhhq\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" 
(UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366782 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366801 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366834 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366859 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366895 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366922 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.366958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.367072 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.367814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.367860 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.368815 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.368881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.368905 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.368928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.369783 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.369907 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.370258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a27453d1-4b9a-481c-8145-5a31f7876f97-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.372714 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.376088 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.379308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.379906 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a27453d1-4b9a-481c-8145-5a31f7876f97-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.393274 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.426436 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhhq\" (UniqueName: \"kubernetes.io/projected/a27453d1-4b9a-481c-8145-5a31f7876f97-kube-api-access-jhhhq\") pod \"cinder-volume-nfs-2-0\" (UID: \"a27453d1-4b9a-481c-8145-5a31f7876f97\") " pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.535683 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:13 crc kubenswrapper[4823]: I1206 07:15:13.569205 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.135194 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.136490 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.398572 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.666967 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Dec 06 07:15:14 crc kubenswrapper[4823]: W1206 07:15:14.698973 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc006d7d9_a998_4d47_97e6_5f81a6c75c0e.slice/crio-07b371bdd77f286e70dc4408f39bf93b04a40c1becd5d5288bde77678b0a28ac WatchSource:0}: Error finding container 07b371bdd77f286e70dc4408f39bf93b04a40c1becd5d5288bde77678b0a28ac: Status 404 returned error can't find the container with id 07b371bdd77f286e70dc4408f39bf93b04a40c1becd5d5288bde77678b0a28ac Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.869490 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"c006d7d9-a998-4d47-97e6-5f81a6c75c0e","Type":"ContainerStarted","Data":"07b371bdd77f286e70dc4408f39bf93b04a40c1becd5d5288bde77678b0a28ac"} Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.878678 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"5a942075-9497-4ebd-958d-14ea50b6558a","Type":"ContainerStarted","Data":"e04ae7f27e932e80942d4bd7abde85580953a8206026172deecf766fb621c1e7"} Dec 06 07:15:14 crc kubenswrapper[4823]: I1206 07:15:14.881549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" 
event={"ID":"a27453d1-4b9a-481c-8145-5a31f7876f97","Type":"ContainerStarted","Data":"6623f772b43d14ba26216ad7ef553685d8acf1fe025c1cecf013772908798c42"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.914261 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"c006d7d9-a998-4d47-97e6-5f81a6c75c0e","Type":"ContainerStarted","Data":"892bf06c7177c57057af708b9d67fec1c9e517bbaada579b968ea52e0cacd7f0"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.915126 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"c006d7d9-a998-4d47-97e6-5f81a6c75c0e","Type":"ContainerStarted","Data":"c5eb8ed36cf5f37d0bd09b4823b40547c2c0f74e417a409be225ff3fff8e078a"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.923082 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"5a942075-9497-4ebd-958d-14ea50b6558a","Type":"ContainerStarted","Data":"70a479a113792ceefac63a32b0680f98dc91a19ac8e480c338b582ed3cff7c19"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.923137 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"5a942075-9497-4ebd-958d-14ea50b6558a","Type":"ContainerStarted","Data":"b58a71e817a5f92a69053f9c45a4deccbd33e14a4a0e5745e9a2a5b0dfc355c6"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.926564 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"a27453d1-4b9a-481c-8145-5a31f7876f97","Type":"ContainerStarted","Data":"e0e644310fcf24f60e76eed4c92a3f1d4bd3da47df45420b1b7e7d50427a1382"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.926625 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"a27453d1-4b9a-481c-8145-5a31f7876f97","Type":"ContainerStarted","Data":"e6f389f34d8d6a2df7b36fe5b833c39a317095e18626123bc077cae289d37097"} Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.960853 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.878058791 podStartE2EDuration="2.960829811s" podCreationTimestamp="2025-12-06 07:15:13 +0000 UTC" firstStartedPulling="2025-12-06 07:15:14.701250075 +0000 UTC m=+3015.987002045" lastFinishedPulling="2025-12-06 07:15:14.784021105 +0000 UTC m=+3016.069773065" observedRunningTime="2025-12-06 07:15:15.950034397 +0000 UTC m=+3017.235786357" watchObservedRunningTime="2025-12-06 07:15:15.960829811 +0000 UTC m=+3017.246581771" Dec 06 07:15:15 crc kubenswrapper[4823]: I1206 07:15:15.997484 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.619212185 podStartE2EDuration="3.997458728s" podCreationTimestamp="2025-12-06 07:15:12 +0000 UTC" firstStartedPulling="2025-12-06 07:15:14.135959776 +0000 UTC m=+3015.421711736" lastFinishedPulling="2025-12-06 07:15:14.514206319 +0000 UTC m=+3015.799958279" observedRunningTime="2025-12-06 07:15:15.991478824 +0000 UTC m=+3017.277230794" watchObservedRunningTime="2025-12-06 07:15:15.997458728 +0000 UTC m=+3017.283210688" Dec 06 07:15:16 crc kubenswrapper[4823]: I1206 07:15:16.023266 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.7067644939999997 podStartE2EDuration="3.023245759s" podCreationTimestamp="2025-12-06 07:15:13 +0000 UTC" firstStartedPulling="2025-12-06 
07:15:14.436046573 +0000 UTC m=+3015.721798533" lastFinishedPulling="2025-12-06 07:15:14.752527838 +0000 UTC m=+3016.038279798" observedRunningTime="2025-12-06 07:15:16.011510947 +0000 UTC m=+3017.297262917" watchObservedRunningTime="2025-12-06 07:15:16.023245759 +0000 UTC m=+3017.308997719" Dec 06 07:15:18 crc kubenswrapper[4823]: I1206 07:15:18.394157 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:18 crc kubenswrapper[4823]: I1206 07:15:18.536485 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:18 crc kubenswrapper[4823]: I1206 07:15:18.569340 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 06 07:15:19 crc kubenswrapper[4823]: I1206 07:15:19.149130 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:15:19 crc kubenswrapper[4823]: E1206 07:15:19.149911 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:15:23 crc kubenswrapper[4823]: I1206 07:15:23.568081 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Dec 06 07:15:23 crc kubenswrapper[4823]: I1206 07:15:23.785173 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Dec 06 07:15:23 crc kubenswrapper[4823]: I1206 07:15:23.856633 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 06 07:15:32 crc kubenswrapper[4823]: I1206 07:15:32.140507 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:15:32 crc kubenswrapper[4823]: E1206 07:15:32.141219 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:15:44 crc kubenswrapper[4823]: I1206 07:15:44.064385 4823 scope.go:117] "RemoveContainer" containerID="e8fc561ffe45e69fcb164c671db1dc35b0a28e2b3946e242c6bd64754b0d9d74" Dec 06 07:15:47 crc kubenswrapper[4823]: I1206 07:15:47.141257 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:15:47 crc kubenswrapper[4823]: E1206 07:15:47.141875 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:16:02 crc 
kubenswrapper[4823]: I1206 07:16:02.141409 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:16:02 crc kubenswrapper[4823]: E1206 07:16:02.142287 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:16:16 crc kubenswrapper[4823]: I1206 07:16:16.140924 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:16:16 crc kubenswrapper[4823]: E1206 07:16:16.141631 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:16:19 crc kubenswrapper[4823]: I1206 07:16:19.861792 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 07:16:19 crc kubenswrapper[4823]: I1206 07:16:19.862552 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="prometheus" containerID="cri-o://4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" gracePeriod=600 Dec 06 07:16:19 crc kubenswrapper[4823]: I1206 07:16:19.862621 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="config-reloader" containerID="cri-o://751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" gracePeriod=600 Dec 06 07:16:19 crc kubenswrapper[4823]: I1206 07:16:19.862688 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="thanos-sidecar" containerID="cri-o://310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" gracePeriod=600 Dec 06 07:16:20 crc kubenswrapper[4823]: E1206 07:16:20.369957 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd605a044_9bcd_4e5f_a44f_71cf32706e46.slice/crio-conmon-4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd605a044_9bcd_4e5f_a44f_71cf32706e46.slice/crio-751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd605a044_9bcd_4e5f_a44f_71cf32706e46.slice/crio-4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd605a044_9bcd_4e5f_a44f_71cf32706e46.slice/crio-conmon-310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c.scope\": RecentStats: unable to find data in memory cache]" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.845098 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932516 4823 generic.go:334] "Generic (PLEG): container finished" podID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerID="310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" exitCode=0 Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932556 4823 generic.go:334] "Generic (PLEG): container finished" podID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerID="751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" exitCode=0 Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932568 4823 generic.go:334] "Generic (PLEG): container finished" podID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerID="4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" exitCode=0 Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932584 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerDied","Data":"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c"} Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932657 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerDied","Data":"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2"} Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932704 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerDied","Data":"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b"} Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932725 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d605a044-9bcd-4e5f-a44f-71cf32706e46","Type":"ContainerDied","Data":"7ea7b06b4a3ec3689dbd8ad582634df7765f9f13a2155ba8aa3a07eedf484685"} Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.932753 4823 scope.go:117] "RemoveContainer" containerID="310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.934846 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939430 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-tls-assets\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939478 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d605a044-9bcd-4e5f-a44f-71cf32706e46-prometheus-metric-storage-rulefiles-0\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939573 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-thanos-prometheus-http-client-file\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939637 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939748 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n6t4\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-kube-api-access-6n6t4\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939787 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d605a044-9bcd-4e5f-a44f-71cf32706e46-config-out\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939863 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 
07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939897 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-config\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.939952 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-secret-combined-ca-bundle\") pod \"d605a044-9bcd-4e5f-a44f-71cf32706e46\" (UID: \"d605a044-9bcd-4e5f-a44f-71cf32706e46\") " Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.941769 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d605a044-9bcd-4e5f-a44f-71cf32706e46-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.946202 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d605a044-9bcd-4e5f-a44f-71cf32706e46-config-out" (OuterVolumeSpecName: "config-out") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.947817 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-kube-api-access-6n6t4" (OuterVolumeSpecName: "kube-api-access-6n6t4") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "kube-api-access-6n6t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.948269 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-config" (OuterVolumeSpecName: "config") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.948381 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.954656 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.955250 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.955685 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.958406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.969139 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: "d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 07:16:20 crc kubenswrapper[4823]: I1206 07:16:20.992101 4823 scope.go:117] "RemoveContainer" containerID="751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042826 4823 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042871 4823 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042887 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n6t4\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-kube-api-access-6n6t4\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042901 4823 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d605a044-9bcd-4e5f-a44f-71cf32706e46-config-out\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042912 4823 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042932 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042945 4823 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042976 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") on node \"crc\" " Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.042988 4823 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d605a044-9bcd-4e5f-a44f-71cf32706e46-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.043000 4823 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d605a044-9bcd-4e5f-a44f-71cf32706e46-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.095803 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config" (OuterVolumeSpecName: "web-config") pod "d605a044-9bcd-4e5f-a44f-71cf32706e46" (UID: 
"d605a044-9bcd-4e5f-a44f-71cf32706e46"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.104274 4823 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.104434 4823 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8") on node "crc" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.144509 4823 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d605a044-9bcd-4e5f-a44f-71cf32706e46-web-config\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.145525 4823 reconciler_common.go:293] "Volume detached for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.159617 4823 scope.go:117] "RemoveContainer" containerID="4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.181173 4823 scope.go:117] "RemoveContainer" containerID="d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.212086 4823 scope.go:117] "RemoveContainer" containerID="310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.212686 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": container with ID starting with 310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c not found: ID does not exist" containerID="310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.212747 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c"} err="failed to get container status \"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": rpc error: code = NotFound desc = could not find container \"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": container with ID starting with 310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.212782 4823 scope.go:117] "RemoveContainer" containerID="751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.213074 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": container with ID starting with 751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2 not found: ID does not exist" containerID="751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213109 4823 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2"} err="failed to get container status \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": rpc error: code = NotFound desc = could not find container \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": container with ID starting with 751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2 not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213135 4823 scope.go:117] "RemoveContainer" containerID="4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.213327 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": container with ID starting with 4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b not found: ID does not exist" containerID="4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213351 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b"} err="failed to get container status \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": rpc error: code = NotFound desc = could not find container \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": container with ID starting with 4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213364 4823 scope.go:117] "RemoveContainer" containerID="d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.213553 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": container with ID starting with d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237 not found: ID does not exist" containerID="d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213569 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237"} err="failed to get container status \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": rpc error: code = NotFound desc = could not find container \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": container with ID starting with d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237 not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213583 4823 scope.go:117] "RemoveContainer" containerID="310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213755 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c"} err="failed to get container status \"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": rpc error: code = NotFound desc = could not find container 
\"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": container with ID starting with 310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213768 4823 scope.go:117] "RemoveContainer" containerID="751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213918 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2"} err="failed to get container status \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": rpc error: code = NotFound desc = could not find container \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": container with ID starting with 751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2 not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.213937 4823 scope.go:117] "RemoveContainer" containerID="4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.214142 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b"} err="failed to get container status \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": rpc error: code = NotFound desc = could not find container \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": container with ID starting with 4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.214177 4823 scope.go:117] "RemoveContainer" containerID="d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.214395 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237"} err="failed to get container status \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": rpc error: code = NotFound desc = could not find container \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": container with ID starting with d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237 not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.214420 4823 scope.go:117] "RemoveContainer" containerID="310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.214830 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c"} err="failed to get container status \"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": rpc error: code = NotFound desc = could not find container \"310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c\": container with ID starting with 310adfc8df74c4df729f5df0c50271f4a811ef780a92de7e8a08fb38148f185c not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.214923 4823 scope.go:117] "RemoveContainer" containerID="751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.215301 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2"} err="failed to get container status \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": rpc error: code = NotFound desc = could not find container \"751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2\": container with ID starting with 751aa7d07b9859478e77c21740a14ea68c0d8e0c9f752c81222f44b2de1806c2 not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.215325 4823 scope.go:117] "RemoveContainer" containerID="4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.215558 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b"} err="failed to get container status \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": rpc error: code = NotFound desc = could not find container \"4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b\": container with ID starting with 4c36b34b4537cc4dcb71d9581dcad3244b7b0a4dea851026a6098f1ade8f2f2b not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.215579 4823 scope.go:117] "RemoveContainer" containerID="d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.215825 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237"} err="failed to get container status \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": rpc error: code = NotFound desc = could not find container \"d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237\": container with ID starting with d0d20704dc8ce42702280f5073226c84b5f1ae3912b827653430f48763c24237 not found: ID does not exist" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.266091 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.281246 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.311518 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.311973 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="prometheus" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.311993 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="prometheus" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.312021 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="config-reloader" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.312029 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="config-reloader" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.312039 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="thanos-sidecar" Dec 06 
07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.312045 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="thanos-sidecar" Dec 06 07:16:21 crc kubenswrapper[4823]: E1206 07:16:21.312062 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="init-config-reloader" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.312068 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="init-config-reloader" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.312294 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="thanos-sidecar" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.312317 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="config-reloader" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.312330 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" containerName="prometheus" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.314441 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.317479 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.317640 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.317803 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.317954 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ts4xm" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.320908 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.338086 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.342444 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.474884 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8896582a-b688-4a50-8d29-ff8d5faefb5c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.474983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475053 
4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8896582a-b688-4a50-8d29-ff8d5faefb5c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475116 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-config\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8896582a-b688-4a50-8d29-ff8d5faefb5c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475341 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ctc\" (UniqueName: \"kubernetes.io/projected/8896582a-b688-4a50-8d29-ff8d5faefb5c-kube-api-access-54ctc\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475379 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475401 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.475473 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577202 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577275 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577327 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8896582a-b688-4a50-8d29-ff8d5faefb5c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577489 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577525 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8896582a-b688-4a50-8d29-ff8d5faefb5c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577559 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-config\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577726 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8896582a-b688-4a50-8d29-ff8d5faefb5c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.577771 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ctc\" (UniqueName: \"kubernetes.io/projected/8896582a-b688-4a50-8d29-ff8d5faefb5c-kube-api-access-54ctc\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.578948 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8896582a-b688-4a50-8d29-ff8d5faefb5c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.581881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8896582a-b688-4a50-8d29-ff8d5faefb5c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.582296 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.582332 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fab8261b70d6f995dab453a667c3bae61bb90c651f0d61d1c06bd0698dff1b77/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.582885 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8896582a-b688-4a50-8d29-ff8d5faefb5c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.583586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-config\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.584205 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.584740 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.586174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.586911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.594243 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8896582a-b688-4a50-8d29-ff8d5faefb5c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.605376 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ctc\" (UniqueName: \"kubernetes.io/projected/8896582a-b688-4a50-8d29-ff8d5faefb5c-kube-api-access-54ctc\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.634919 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ee8a238-e9d0-400e-b692-e1979f4545b8\") pod \"prometheus-metric-storage-0\" (UID: \"8896582a-b688-4a50-8d29-ff8d5faefb5c\") " pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:21 crc kubenswrapper[4823]: I1206 07:16:21.692551 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:22 crc kubenswrapper[4823]: I1206 07:16:22.157951 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 07:16:22 crc kubenswrapper[4823]: I1206 07:16:22.955081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8896582a-b688-4a50-8d29-ff8d5faefb5c","Type":"ContainerStarted","Data":"f50dfd214a000a9138fbfaee2b8e72cc3aa421e7ac118795a139ebc54ba11c9d"} Dec 06 07:16:23 crc kubenswrapper[4823]: I1206 07:16:23.154185 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d605a044-9bcd-4e5f-a44f-71cf32706e46" path="/var/lib/kubelet/pods/d605a044-9bcd-4e5f-a44f-71cf32706e46/volumes" Dec 06 07:16:25 crc kubenswrapper[4823]: I1206 07:16:25.985890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8896582a-b688-4a50-8d29-ff8d5faefb5c","Type":"ContainerStarted","Data":"267d0443bc138de8fdad474d968b99c9ba389cedebd301c819800c9c42384bc1"} Dec 06 07:16:30 crc kubenswrapper[4823]: I1206 07:16:30.140692 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:16:30 crc kubenswrapper[4823]: E1206 07:16:30.141571 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:16:33 crc kubenswrapper[4823]: I1206 07:16:33.061041 4823 generic.go:334] "Generic (PLEG): container finished" podID="8896582a-b688-4a50-8d29-ff8d5faefb5c" containerID="267d0443bc138de8fdad474d968b99c9ba389cedebd301c819800c9c42384bc1" exitCode=0 Dec 06 07:16:33 crc kubenswrapper[4823]: I1206 07:16:33.061152 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8896582a-b688-4a50-8d29-ff8d5faefb5c","Type":"ContainerDied","Data":"267d0443bc138de8fdad474d968b99c9ba389cedebd301c819800c9c42384bc1"} Dec 06 07:16:34 crc kubenswrapper[4823]: I1206 07:16:34.072172 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8896582a-b688-4a50-8d29-ff8d5faefb5c","Type":"ContainerStarted","Data":"c4b9926170600eff07f54c1e8b078cdd6403e53fd99da4a32af5e8902598bcb9"} Dec 06 07:16:38 crc 
kubenswrapper[4823]: I1206 07:16:38.129643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8896582a-b688-4a50-8d29-ff8d5faefb5c","Type":"ContainerStarted","Data":"afba137fbf180abcd73880984cc5a1522293d25d9040615d7d402b77567c24be"} Dec 06 07:16:38 crc kubenswrapper[4823]: I1206 07:16:38.130614 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8896582a-b688-4a50-8d29-ff8d5faefb5c","Type":"ContainerStarted","Data":"be42c2ef15b39bc52e69c91e0fbf14f144a76ead13fbc87d9d1a6ffcd1002f59"} Dec 06 07:16:38 crc kubenswrapper[4823]: I1206 07:16:38.163771 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.163745581 podStartE2EDuration="17.163745581s" podCreationTimestamp="2025-12-06 07:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:16:38.157001615 +0000 UTC m=+3099.442753575" watchObservedRunningTime="2025-12-06 07:16:38.163745581 +0000 UTC m=+3099.449497541" Dec 06 07:16:41 crc kubenswrapper[4823]: I1206 07:16:41.141379 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:16:41 crc kubenswrapper[4823]: E1206 07:16:41.141979 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:16:41 crc kubenswrapper[4823]: I1206 07:16:41.693579 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:51 crc kubenswrapper[4823]: I1206 07:16:51.693202 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:51 crc kubenswrapper[4823]: I1206 07:16:51.753803 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:52 crc kubenswrapper[4823]: I1206 07:16:52.325897 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 06 07:16:55 crc kubenswrapper[4823]: I1206 07:16:55.140623 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:16:55 crc kubenswrapper[4823]: E1206 07:16:55.141455 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:17:09 crc kubenswrapper[4823]: I1206 07:17:09.152241 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:17:09 crc kubenswrapper[4823]: E1206 07:17:09.153064 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.227899 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.230251 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.235093 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-brd8d" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.235464 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.235764 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.236011 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.239902 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396364 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-config-data\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396481 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396577 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396600 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " 
pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396621 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mm7t\" (UniqueName: \"kubernetes.io/projected/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-kube-api-access-5mm7t\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396868 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.396907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.498727 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499005 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499047 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-config-data\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499064 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499098 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 
07:17:15.499170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499210 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499248 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mm7t\" (UniqueName: \"kubernetes.io/projected/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-kube-api-access-5mm7t\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499427 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.499863 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.500139 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.500682 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.501143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-config-data\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.507800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.516561 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.517236 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.520552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mm7t\" (UniqueName: \"kubernetes.io/projected/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-kube-api-access-5mm7t\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.550304 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") " pod="openstack/tempest-tests-tempest" Dec 06 07:17:15 crc kubenswrapper[4823]: I1206 07:17:15.558812 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 07:17:16 crc kubenswrapper[4823]: I1206 07:17:16.108572 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 07:17:16 crc kubenswrapper[4823]: I1206 07:17:16.577938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc939bd4-7c0b-4783-a90c-cb9791a86c9f","Type":"ContainerStarted","Data":"477fc55e1f01b0d033be1e6efbdc7fce259b42688790ae984dfbe4f4e94cfdca"} Dec 06 07:17:20 crc kubenswrapper[4823]: I1206 07:17:20.141151 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:17:20 crc kubenswrapper[4823]: E1206 07:17:20.141636 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:17:29 crc kubenswrapper[4823]: I1206 07:17:29.724241 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc939bd4-7c0b-4783-a90c-cb9791a86c9f","Type":"ContainerStarted","Data":"9fde670becde100a33b05eb62ecbed10be4a6dbd7b2a06c9ea6e5482580a148d"} Dec 06 07:17:29 crc kubenswrapper[4823]: I1206 07:17:29.753134 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.702492735 podStartE2EDuration="15.753101057s" podCreationTimestamp="2025-12-06 07:17:14 +0000 UTC" firstStartedPulling="2025-12-06 07:17:16.118713691 +0000 UTC m=+3137.404465651" lastFinishedPulling="2025-12-06 07:17:28.169322013 +0000 UTC m=+3149.455073973" observedRunningTime="2025-12-06 07:17:29.74594453 +0000 UTC m=+3151.031696490" watchObservedRunningTime="2025-12-06 07:17:29.753101057 +0000 UTC m=+3151.038853017" Dec 06 07:17:35 crc kubenswrapper[4823]: I1206 07:17:35.140714 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:17:35 crc kubenswrapper[4823]: E1206 07:17:35.141496 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:17:50 crc kubenswrapper[4823]: I1206 07:17:50.141893 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:17:50 crc kubenswrapper[4823]: E1206 07:17:50.142849 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:18:04 crc kubenswrapper[4823]: I1206 07:18:04.141337 4823 scope.go:117] 
"RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:18:04 crc kubenswrapper[4823]: E1206 07:18:04.142214 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:18:18 crc kubenswrapper[4823]: I1206 07:18:18.141207 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:18:18 crc kubenswrapper[4823]: E1206 07:18:18.141933 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:18:31 crc kubenswrapper[4823]: I1206 07:18:31.140955 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:18:31 crc kubenswrapper[4823]: E1206 07:18:31.141747 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:18:44 crc kubenswrapper[4823]: I1206 07:18:44.141173 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:18:44 crc kubenswrapper[4823]: E1206 07:18:44.141904 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:18:57 crc kubenswrapper[4823]: I1206 07:18:57.141860 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:18:57 crc kubenswrapper[4823]: E1206 07:18:57.142500 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:19:10 crc kubenswrapper[4823]: I1206 07:19:10.141306 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:19:10 crc kubenswrapper[4823]: E1206 07:19:10.143071 4823 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:19:24 crc kubenswrapper[4823]: I1206 07:19:24.141806 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:19:24 crc kubenswrapper[4823]: E1206 07:19:24.142926 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:19:39 crc kubenswrapper[4823]: I1206 07:19:39.149175 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:19:40 crc kubenswrapper[4823]: I1206 07:19:40.109085 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"586a74a448f82acac41b8e54bb568e0eb3040601caa978dce2662a8d7af685c7"} Dec 06 07:22:06 crc kubenswrapper[4823]: I1206 07:22:06.052342 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:22:06 crc kubenswrapper[4823]: I1206 07:22:06.052897 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.194994 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dkbbx"] Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.198487 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.222921 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkbbx"] Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.296484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-utilities\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.297580 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-catalog-content\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.297922 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstbn\" (UniqueName: \"kubernetes.io/projected/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-kube-api-access-jstbn\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.399963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-utilities\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.400434 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-catalog-content\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.400547 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstbn\" (UniqueName: \"kubernetes.io/projected/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-kube-api-access-jstbn\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.400779 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-utilities\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.401338 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-catalog-content\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.424078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jstbn\" (UniqueName: \"kubernetes.io/projected/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-kube-api-access-jstbn\") pod \"redhat-operators-dkbbx\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:20 crc kubenswrapper[4823]: I1206 07:22:20.527103 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:21 crc kubenswrapper[4823]: I1206 07:22:21.342253 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkbbx"] Dec 06 07:22:21 crc kubenswrapper[4823]: I1206 07:22:21.894363 4823 generic.go:334] "Generic (PLEG): container finished" podID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerID="c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977" exitCode=0 Dec 06 07:22:21 crc kubenswrapper[4823]: I1206 07:22:21.894441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerDied","Data":"c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977"} Dec 06 07:22:21 crc kubenswrapper[4823]: I1206 07:22:21.894777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerStarted","Data":"295cab213b20e22e1cc62a93650f8c6f1b11d9070a0e0ef2d97c47f9c6f8dca3"} Dec 06 07:22:21 crc kubenswrapper[4823]: I1206 07:22:21.896972 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:22:22 crc kubenswrapper[4823]: I1206 07:22:22.905489 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerStarted","Data":"5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a"} Dec 06 07:22:24 crc kubenswrapper[4823]: I1206 07:22:24.924467 4823 generic.go:334] "Generic (PLEG): container finished" podID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerID="5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a" exitCode=0 Dec 06 07:22:24 crc kubenswrapper[4823]: I1206 07:22:24.924685 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerDied","Data":"5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a"} Dec 06 07:22:25 crc kubenswrapper[4823]: I1206 07:22:25.936092 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerStarted","Data":"affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb"} Dec 06 07:22:25 crc kubenswrapper[4823]: I1206 07:22:25.953510 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dkbbx" podStartSLOduration=2.52436463 podStartE2EDuration="5.953476004s" podCreationTimestamp="2025-12-06 07:22:20 +0000 UTC" firstStartedPulling="2025-12-06 07:22:21.896676675 +0000 UTC m=+3443.182428635" lastFinishedPulling="2025-12-06 07:22:25.325788049 +0000 UTC m=+3446.611540009" observedRunningTime="2025-12-06 07:22:25.952212867 +0000 UTC m=+3447.237964827" watchObservedRunningTime="2025-12-06 07:22:25.953476004 +0000 UTC m=+3447.239227974" Dec 06 07:22:30 crc 
kubenswrapper[4823]: I1206 07:22:30.527458 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:30 crc kubenswrapper[4823]: I1206 07:22:30.528040 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:31 crc kubenswrapper[4823]: I1206 07:22:31.585639 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dkbbx" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="registry-server" probeResult="failure" output=< Dec 06 07:22:31 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 07:22:31 crc kubenswrapper[4823]: > Dec 06 07:22:36 crc kubenswrapper[4823]: I1206 07:22:36.052372 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:22:36 crc kubenswrapper[4823]: I1206 07:22:36.053044 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:22:40 crc kubenswrapper[4823]: I1206 07:22:40.583425 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:40 crc kubenswrapper[4823]: I1206 07:22:40.637358 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:40 crc kubenswrapper[4823]: I1206 07:22:40.818152 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkbbx"] Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.106492 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dkbbx" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="registry-server" containerID="cri-o://affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb" gracePeriod=2 Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.496738 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.515022 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-utilities\") pod \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.515069 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-catalog-content\") pod \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.515090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jstbn\" (UniqueName: \"kubernetes.io/projected/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-kube-api-access-jstbn\") pod \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\" (UID: \"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496\") " Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.516823 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-utilities" (OuterVolumeSpecName: "utilities") pod "56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" (UID: "56bd2ad7-3e95-46b9-bda2-fed4a5ba0496"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.521984 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-kube-api-access-jstbn" (OuterVolumeSpecName: "kube-api-access-jstbn") pod "56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" (UID: "56bd2ad7-3e95-46b9-bda2-fed4a5ba0496"). InnerVolumeSpecName "kube-api-access-jstbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.617112 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jstbn\" (UniqueName: \"kubernetes.io/projected/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-kube-api-access-jstbn\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.617159 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.653946 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" (UID: "56bd2ad7-3e95-46b9-bda2-fed4a5ba0496"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:22:42 crc kubenswrapper[4823]: I1206 07:22:42.719248 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.117757 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dkbbx" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.117760 4823 generic.go:334] "Generic (PLEG): container finished" podID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerID="affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb" exitCode=0 Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.117804 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerDied","Data":"affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb"} Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.118255 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkbbx" event={"ID":"56bd2ad7-3e95-46b9-bda2-fed4a5ba0496","Type":"ContainerDied","Data":"295cab213b20e22e1cc62a93650f8c6f1b11d9070a0e0ef2d97c47f9c6f8dca3"} Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.118275 4823 scope.go:117] "RemoveContainer" containerID="affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.140462 4823 scope.go:117] "RemoveContainer" containerID="5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.163357 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkbbx"] Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.170835 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dkbbx"] Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.174622 4823 scope.go:117] "RemoveContainer" containerID="c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.221685 4823 scope.go:117] "RemoveContainer" containerID="affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb" Dec 06 07:22:43 crc kubenswrapper[4823]: E1206 07:22:43.222292 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb\": container with ID starting with affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb not found: ID does not exist" containerID="affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.222357 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb"} err="failed to get container status \"affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb\": rpc error: code = NotFound desc = could not find container \"affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb\": container with ID starting with affe5cfde90dc382ce4515c8ccb1971c975b792ded6874246bbec6522ebcc4bb not found: ID does not exist" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.222437 4823 scope.go:117] "RemoveContainer" containerID="5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a" Dec 06 07:22:43 crc kubenswrapper[4823]: E1206 07:22:43.222886 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a\": container with ID starting with 
5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a not found: ID does not exist" containerID="5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.222933 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a"} err="failed to get container status \"5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a\": rpc error: code = NotFound desc = could not find container \"5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a\": container with ID starting with 5a83307effc3c72c91f8f9dd5282261c3510a75b79794e9bd841e3a4be8d853a not found: ID does not exist" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.222964 4823 scope.go:117] "RemoveContainer" containerID="c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977" Dec 06 07:22:43 crc kubenswrapper[4823]: E1206 07:22:43.223345 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977\": container with ID starting with c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977 not found: ID does not exist" containerID="c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977" Dec 06 07:22:43 crc kubenswrapper[4823]: I1206 07:22:43.223410 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977"} err="failed to get container status \"c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977\": rpc error: code = NotFound desc = could not find container \"c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977\": container with ID starting with c2c0a883405538acd236b1c0b272477bd5b5725824ed7bdab937acc03fdbc977 not found: ID does not exist" Dec 06 07:22:45 crc kubenswrapper[4823]: I1206 07:22:45.153111 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" path="/var/lib/kubelet/pods/56bd2ad7-3e95-46b9-bda2-fed4a5ba0496/volumes" Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.052223 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.052832 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.052887 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.053794 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"586a74a448f82acac41b8e54bb568e0eb3040601caa978dce2662a8d7af685c7"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.053851 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://586a74a448f82acac41b8e54bb568e0eb3040601caa978dce2662a8d7af685c7" gracePeriod=600 Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.339467 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="586a74a448f82acac41b8e54bb568e0eb3040601caa978dce2662a8d7af685c7" exitCode=0 Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.339541 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"586a74a448f82acac41b8e54bb568e0eb3040601caa978dce2662a8d7af685c7"} Dec 06 07:23:06 crc kubenswrapper[4823]: I1206 07:23:06.339751 4823 scope.go:117] "RemoveContainer" containerID="0b680f927c0cff8ad990783a5eb1b16ca5e2acd470e292c9e33b1979f57bbc96" Dec 06 07:23:07 crc kubenswrapper[4823]: I1206 07:23:07.350280 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62"} Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.171002 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hjgf7"] Dec 06 07:24:10 crc kubenswrapper[4823]: E1206 07:24:10.172461 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="registry-server" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.172483 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="registry-server" Dec 06 07:24:10 crc kubenswrapper[4823]: E1206 07:24:10.172512 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="extract-utilities" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.172521 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="extract-utilities" Dec 06 07:24:10 crc kubenswrapper[4823]: E1206 07:24:10.172541 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="extract-content" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.172548 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="extract-content" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.172810 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="56bd2ad7-3e95-46b9-bda2-fed4a5ba0496" containerName="registry-server" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.174902 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.184629 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjgf7"] Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.326874 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-catalog-content\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.327152 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5fj\" (UniqueName: \"kubernetes.io/projected/d5910a23-5bdd-4425-aa9c-70824512dc52-kube-api-access-gf5fj\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.327294 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-utilities\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.428975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5fj\" (UniqueName: \"kubernetes.io/projected/d5910a23-5bdd-4425-aa9c-70824512dc52-kube-api-access-gf5fj\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.429088 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-utilities\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.429167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-catalog-content\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.429757 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-utilities\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.429824 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-catalog-content\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.450915 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gf5fj\" (UniqueName: \"kubernetes.io/projected/d5910a23-5bdd-4425-aa9c-70824512dc52-kube-api-access-gf5fj\") pod \"redhat-marketplace-hjgf7\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.495227 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:10 crc kubenswrapper[4823]: I1206 07:24:10.975810 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjgf7"] Dec 06 07:24:11 crc kubenswrapper[4823]: I1206 07:24:11.106333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerStarted","Data":"55aca1f52b0d2c6a1163f9ad265163d772623261884facc91ba38cb7555d86fc"} Dec 06 07:24:12 crc kubenswrapper[4823]: I1206 07:24:12.116748 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerID="fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765" exitCode=0 Dec 06 07:24:12 crc kubenswrapper[4823]: I1206 07:24:12.116833 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerDied","Data":"fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765"} Dec 06 07:24:13 crc kubenswrapper[4823]: I1206 07:24:13.127514 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerStarted","Data":"4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826"} Dec 06 07:24:14 crc kubenswrapper[4823]: I1206 07:24:14.138632 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerID="4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826" exitCode=0 Dec 06 07:24:14 crc kubenswrapper[4823]: I1206 07:24:14.138748 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerDied","Data":"4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826"} Dec 06 07:24:15 crc kubenswrapper[4823]: I1206 07:24:15.150772 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerStarted","Data":"4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2"} Dec 06 07:24:15 crc kubenswrapper[4823]: I1206 07:24:15.176505 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hjgf7" podStartSLOduration=2.666909821 podStartE2EDuration="5.176468296s" podCreationTimestamp="2025-12-06 07:24:10 +0000 UTC" firstStartedPulling="2025-12-06 07:24:12.118578069 +0000 UTC m=+3553.404330029" lastFinishedPulling="2025-12-06 07:24:14.628136544 +0000 UTC m=+3555.913888504" observedRunningTime="2025-12-06 07:24:15.169147934 +0000 UTC m=+3556.454899884" watchObservedRunningTime="2025-12-06 07:24:15.176468296 +0000 UTC m=+3556.462220266" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.618129 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-gl96n"] Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.621293 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.635433 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gl96n"] Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.804399 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-catalog-content\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.804580 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-utilities\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.804681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbff\" (UniqueName: \"kubernetes.io/projected/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-kube-api-access-8hbff\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.906601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-catalog-content\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.906725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-utilities\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.906785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hbff\" (UniqueName: \"kubernetes.io/projected/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-kube-api-access-8hbff\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.907472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-catalog-content\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.907534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-utilities\") pod \"certified-operators-gl96n\" (UID: 
\"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.928075 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hbff\" (UniqueName: \"kubernetes.io/projected/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-kube-api-access-8hbff\") pod \"certified-operators-gl96n\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:19 crc kubenswrapper[4823]: I1206 07:24:19.997979 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:20 crc kubenswrapper[4823]: I1206 07:24:20.496586 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:20 crc kubenswrapper[4823]: I1206 07:24:20.496993 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:20 crc kubenswrapper[4823]: I1206 07:24:20.552738 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:20 crc kubenswrapper[4823]: I1206 07:24:20.584058 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gl96n"] Dec 06 07:24:21 crc kubenswrapper[4823]: I1206 07:24:21.212952 4823 generic.go:334] "Generic (PLEG): container finished" podID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerID="0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41" exitCode=0 Dec 06 07:24:21 crc kubenswrapper[4823]: I1206 07:24:21.213025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerDied","Data":"0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41"} Dec 06 07:24:21 crc kubenswrapper[4823]: I1206 07:24:21.213232 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerStarted","Data":"ef060dd7c8acc8fb4229432de2c50030e19edf435fe750fc2107ad985223fb79"} Dec 06 07:24:21 crc kubenswrapper[4823]: I1206 07:24:21.268074 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:22 crc kubenswrapper[4823]: I1206 07:24:22.226093 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerStarted","Data":"da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1"} Dec 06 07:24:22 crc kubenswrapper[4823]: I1206 07:24:22.800996 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjgf7"] Dec 06 07:24:24 crc kubenswrapper[4823]: I1206 07:24:24.282565 4823 generic.go:334] "Generic (PLEG): container finished" podID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerID="da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1" exitCode=0 Dec 06 07:24:24 crc kubenswrapper[4823]: I1206 07:24:24.282633 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" 
event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerDied","Data":"da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1"} Dec 06 07:24:24 crc kubenswrapper[4823]: I1206 07:24:24.283253 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hjgf7" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="registry-server" containerID="cri-o://4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2" gracePeriod=2 Dec 06 07:24:24 crc kubenswrapper[4823]: I1206 07:24:24.895891 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.016957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf5fj\" (UniqueName: \"kubernetes.io/projected/d5910a23-5bdd-4425-aa9c-70824512dc52-kube-api-access-gf5fj\") pod \"d5910a23-5bdd-4425-aa9c-70824512dc52\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.017149 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-catalog-content\") pod \"d5910a23-5bdd-4425-aa9c-70824512dc52\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.017196 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-utilities\") pod \"d5910a23-5bdd-4425-aa9c-70824512dc52\" (UID: \"d5910a23-5bdd-4425-aa9c-70824512dc52\") " Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.018318 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-utilities" (OuterVolumeSpecName: "utilities") pod "d5910a23-5bdd-4425-aa9c-70824512dc52" (UID: "d5910a23-5bdd-4425-aa9c-70824512dc52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.023563 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5910a23-5bdd-4425-aa9c-70824512dc52-kube-api-access-gf5fj" (OuterVolumeSpecName: "kube-api-access-gf5fj") pod "d5910a23-5bdd-4425-aa9c-70824512dc52" (UID: "d5910a23-5bdd-4425-aa9c-70824512dc52"). InnerVolumeSpecName "kube-api-access-gf5fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.042430 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5910a23-5bdd-4425-aa9c-70824512dc52" (UID: "d5910a23-5bdd-4425-aa9c-70824512dc52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.119541 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.119909 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5910a23-5bdd-4425-aa9c-70824512dc52-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.119924 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf5fj\" (UniqueName: \"kubernetes.io/projected/d5910a23-5bdd-4425-aa9c-70824512dc52-kube-api-access-gf5fj\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.309916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerStarted","Data":"aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f"} Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.314916 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerID="4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2" exitCode=0 Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.314970 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerDied","Data":"4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2"} Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.315003 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjgf7" event={"ID":"d5910a23-5bdd-4425-aa9c-70824512dc52","Type":"ContainerDied","Data":"55aca1f52b0d2c6a1163f9ad265163d772623261884facc91ba38cb7555d86fc"} Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.315023 4823 scope.go:117] "RemoveContainer" containerID="4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.315194 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjgf7" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.344278 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gl96n" podStartSLOduration=2.864251311 podStartE2EDuration="6.344264221s" podCreationTimestamp="2025-12-06 07:24:19 +0000 UTC" firstStartedPulling="2025-12-06 07:24:21.216133284 +0000 UTC m=+3562.501885244" lastFinishedPulling="2025-12-06 07:24:24.696146194 +0000 UTC m=+3565.981898154" observedRunningTime="2025-12-06 07:24:25.341523492 +0000 UTC m=+3566.627275452" watchObservedRunningTime="2025-12-06 07:24:25.344264221 +0000 UTC m=+3566.630016181" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.387798 4823 scope.go:117] "RemoveContainer" containerID="4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.458524 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjgf7"] Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.490864 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjgf7"] Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.493318 4823 scope.go:117] "RemoveContainer" containerID="fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.515522 4823 scope.go:117] "RemoveContainer" containerID="4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2" Dec 06 07:24:25 crc kubenswrapper[4823]: E1206 07:24:25.516040 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2\": container with ID starting with 4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2 not found: ID does not exist" containerID="4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.516089 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2"} err="failed to get container status \"4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2\": rpc error: code = NotFound desc = could not find container \"4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2\": container with ID starting with 4e1226e9774a429a668f9d118d833f90662fae147d48c49aaa98b981dec4bef2 not found: ID does not exist" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.516119 4823 scope.go:117] "RemoveContainer" containerID="4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826" Dec 06 07:24:25 crc kubenswrapper[4823]: E1206 07:24:25.516596 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826\": container with ID starting with 4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826 not found: ID does not exist" containerID="4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.516618 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826"} err="failed to get 
container status \"4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826\": rpc error: code = NotFound desc = could not find container \"4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826\": container with ID starting with 4b9207bb702b5e573294b6c250122c0d7e2addd472cfdfb53e21543791b0d826 not found: ID does not exist" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.516630 4823 scope.go:117] "RemoveContainer" containerID="fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765" Dec 06 07:24:25 crc kubenswrapper[4823]: E1206 07:24:25.517412 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765\": container with ID starting with fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765 not found: ID does not exist" containerID="fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765" Dec 06 07:24:25 crc kubenswrapper[4823]: I1206 07:24:25.517444 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765"} err="failed to get container status \"fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765\": rpc error: code = NotFound desc = could not find container \"fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765\": container with ID starting with fb4ee2f0878a1cfa895b190d714070b843ff9cf3fbf5a0c01a1b065dae8fa765 not found: ID does not exist" Dec 06 07:24:27 crc kubenswrapper[4823]: I1206 07:24:27.151588 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" path="/var/lib/kubelet/pods/d5910a23-5bdd-4425-aa9c-70824512dc52/volumes" Dec 06 07:24:29 crc kubenswrapper[4823]: I1206 07:24:29.999038 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:29 crc kubenswrapper[4823]: I1206 07:24:29.999456 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.047642 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.431079 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.618420 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xntc"] Dec 06 07:24:30 crc kubenswrapper[4823]: E1206 07:24:30.618926 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="extract-utilities" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.618939 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="extract-utilities" Dec 06 07:24:30 crc kubenswrapper[4823]: E1206 07:24:30.618971 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="extract-content" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.618979 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" 
containerName="extract-content" Dec 06 07:24:30 crc kubenswrapper[4823]: E1206 07:24:30.618985 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="registry-server" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.618992 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="registry-server" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.619186 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5910a23-5bdd-4425-aa9c-70824512dc52" containerName="registry-server" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.620681 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.651049 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xntc"] Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.702505 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-catalog-content\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.702567 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-utilities\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.702622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2r2\" (UniqueName: \"kubernetes.io/projected/6db57e68-6479-4520-8b5a-ac125c403cfa-kube-api-access-8s2r2\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.804453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-catalog-content\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.804533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-utilities\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.804610 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2r2\" (UniqueName: \"kubernetes.io/projected/6db57e68-6479-4520-8b5a-ac125c403cfa-kube-api-access-8s2r2\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.804986 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-catalog-content\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.805067 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-utilities\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.824019 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2r2\" (UniqueName: \"kubernetes.io/projected/6db57e68-6479-4520-8b5a-ac125c403cfa-kube-api-access-8s2r2\") pod \"community-operators-8xntc\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:30 crc kubenswrapper[4823]: I1206 07:24:30.969074 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:31 crc kubenswrapper[4823]: I1206 07:24:31.624649 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xntc"] Dec 06 07:24:32 crc kubenswrapper[4823]: I1206 07:24:32.398960 4823 generic.go:334] "Generic (PLEG): container finished" podID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerID="31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656" exitCode=0 Dec 06 07:24:32 crc kubenswrapper[4823]: I1206 07:24:32.399338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerDied","Data":"31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656"} Dec 06 07:24:32 crc kubenswrapper[4823]: I1206 07:24:32.399383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerStarted","Data":"9565aa60e3d1518932e29741693d96b6283a812702a58b174ace02df85501049"} Dec 06 07:24:32 crc kubenswrapper[4823]: I1206 07:24:32.402607 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gl96n"] Dec 06 07:24:33 crc kubenswrapper[4823]: I1206 07:24:33.447139 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gl96n" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="registry-server" containerID="cri-o://aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f" gracePeriod=2 Dec 06 07:24:33 crc kubenswrapper[4823]: I1206 07:24:33.448004 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerStarted","Data":"5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd"} Dec 06 07:24:33 crc kubenswrapper[4823]: I1206 07:24:33.997284 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.078871 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-catalog-content\") pod \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.078934 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-utilities\") pod \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.079056 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hbff\" (UniqueName: \"kubernetes.io/projected/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-kube-api-access-8hbff\") pod \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\" (UID: \"8bfc0b86-2fe3-4e12-b105-7614d17f1eab\") " Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.080157 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-utilities" (OuterVolumeSpecName: "utilities") pod "8bfc0b86-2fe3-4e12-b105-7614d17f1eab" (UID: "8bfc0b86-2fe3-4e12-b105-7614d17f1eab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.092626 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-kube-api-access-8hbff" (OuterVolumeSpecName: "kube-api-access-8hbff") pod "8bfc0b86-2fe3-4e12-b105-7614d17f1eab" (UID: "8bfc0b86-2fe3-4e12-b105-7614d17f1eab"). InnerVolumeSpecName "kube-api-access-8hbff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.182329 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.182361 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hbff\" (UniqueName: \"kubernetes.io/projected/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-kube-api-access-8hbff\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.457899 4823 generic.go:334] "Generic (PLEG): container finished" podID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerID="aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f" exitCode=0 Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.457972 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gl96n" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.457969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerDied","Data":"aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f"} Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.458075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl96n" event={"ID":"8bfc0b86-2fe3-4e12-b105-7614d17f1eab","Type":"ContainerDied","Data":"ef060dd7c8acc8fb4229432de2c50030e19edf435fe750fc2107ad985223fb79"} Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.458092 4823 scope.go:117] "RemoveContainer" containerID="aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.482883 4823 scope.go:117] "RemoveContainer" containerID="da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.513530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bfc0b86-2fe3-4e12-b105-7614d17f1eab" (UID: "8bfc0b86-2fe3-4e12-b105-7614d17f1eab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.523812 4823 scope.go:117] "RemoveContainer" containerID="0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.567343 4823 scope.go:117] "RemoveContainer" containerID="aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f" Dec 06 07:24:34 crc kubenswrapper[4823]: E1206 07:24:34.567762 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f\": container with ID starting with aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f not found: ID does not exist" containerID="aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.567804 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f"} err="failed to get container status \"aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f\": rpc error: code = NotFound desc = could not find container \"aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f\": container with ID starting with aaf4af541209c3eed207c4227bb2a61c76e765b0298342cb1ba0ce7121adf33f not found: ID does not exist" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.567835 4823 scope.go:117] "RemoveContainer" containerID="da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1" Dec 06 07:24:34 crc kubenswrapper[4823]: E1206 07:24:34.568106 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1\": container with ID starting with da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1 not found: ID does not exist" 
containerID="da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.568144 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1"} err="failed to get container status \"da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1\": rpc error: code = NotFound desc = could not find container \"da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1\": container with ID starting with da616b6c4c61f17a95418096059ae1f92ea36a5403a1e97f23f238fdcaa9e9c1 not found: ID does not exist" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.568165 4823 scope.go:117] "RemoveContainer" containerID="0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41" Dec 06 07:24:34 crc kubenswrapper[4823]: E1206 07:24:34.568440 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41\": container with ID starting with 0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41 not found: ID does not exist" containerID="0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.568511 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41"} err="failed to get container status \"0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41\": rpc error: code = NotFound desc = could not find container \"0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41\": container with ID starting with 0e1fbe1ddd4468d71bfc600bc9fca64ae919c22978a98016bb63c2cf58a4ca41 not found: ID does not exist" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.590493 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bfc0b86-2fe3-4e12-b105-7614d17f1eab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.796689 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gl96n"] Dec 06 07:24:34 crc kubenswrapper[4823]: I1206 07:24:34.806006 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gl96n"] Dec 06 07:24:35 crc kubenswrapper[4823]: I1206 07:24:35.153714 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" path="/var/lib/kubelet/pods/8bfc0b86-2fe3-4e12-b105-7614d17f1eab/volumes" Dec 06 07:24:35 crc kubenswrapper[4823]: I1206 07:24:35.491275 4823 generic.go:334] "Generic (PLEG): container finished" podID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerID="5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd" exitCode=0 Dec 06 07:24:35 crc kubenswrapper[4823]: I1206 07:24:35.491351 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerDied","Data":"5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd"} Dec 06 07:24:36 crc kubenswrapper[4823]: I1206 07:24:36.504221 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" 
event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerStarted","Data":"dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954"} Dec 06 07:24:36 crc kubenswrapper[4823]: I1206 07:24:36.660680 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xntc" podStartSLOduration=3.158524863 podStartE2EDuration="6.660644231s" podCreationTimestamp="2025-12-06 07:24:30 +0000 UTC" firstStartedPulling="2025-12-06 07:24:32.40065467 +0000 UTC m=+3573.686406620" lastFinishedPulling="2025-12-06 07:24:35.902774028 +0000 UTC m=+3577.188525988" observedRunningTime="2025-12-06 07:24:36.658180809 +0000 UTC m=+3577.943932759" watchObservedRunningTime="2025-12-06 07:24:36.660644231 +0000 UTC m=+3577.946396291" Dec 06 07:24:40 crc kubenswrapper[4823]: I1206 07:24:40.969812 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:40 crc kubenswrapper[4823]: I1206 07:24:40.970345 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:41 crc kubenswrapper[4823]: I1206 07:24:41.086018 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:41 crc kubenswrapper[4823]: I1206 07:24:41.709520 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:42 crc kubenswrapper[4823]: I1206 07:24:42.798567 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xntc"] Dec 06 07:24:43 crc kubenswrapper[4823]: I1206 07:24:43.684822 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xntc" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="registry-server" containerID="cri-o://dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954" gracePeriod=2 Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.335724 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.499941 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2r2\" (UniqueName: \"kubernetes.io/projected/6db57e68-6479-4520-8b5a-ac125c403cfa-kube-api-access-8s2r2\") pod \"6db57e68-6479-4520-8b5a-ac125c403cfa\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.500122 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-catalog-content\") pod \"6db57e68-6479-4520-8b5a-ac125c403cfa\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.500222 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-utilities\") pod \"6db57e68-6479-4520-8b5a-ac125c403cfa\" (UID: \"6db57e68-6479-4520-8b5a-ac125c403cfa\") " Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.501564 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-utilities" (OuterVolumeSpecName: "utilities") pod "6db57e68-6479-4520-8b5a-ac125c403cfa" (UID: "6db57e68-6479-4520-8b5a-ac125c403cfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.507539 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db57e68-6479-4520-8b5a-ac125c403cfa-kube-api-access-8s2r2" (OuterVolumeSpecName: "kube-api-access-8s2r2") pod "6db57e68-6479-4520-8b5a-ac125c403cfa" (UID: "6db57e68-6479-4520-8b5a-ac125c403cfa"). InnerVolumeSpecName "kube-api-access-8s2r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.561551 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db57e68-6479-4520-8b5a-ac125c403cfa" (UID: "6db57e68-6479-4520-8b5a-ac125c403cfa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.602879 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.602921 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2r2\" (UniqueName: \"kubernetes.io/projected/6db57e68-6479-4520-8b5a-ac125c403cfa-kube-api-access-8s2r2\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.602936 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db57e68-6479-4520-8b5a-ac125c403cfa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.695483 4823 generic.go:334] "Generic (PLEG): container finished" podID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerID="dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954" exitCode=0 Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.695551 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerDied","Data":"dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954"} Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.695570 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xntc" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.695591 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xntc" event={"ID":"6db57e68-6479-4520-8b5a-ac125c403cfa","Type":"ContainerDied","Data":"9565aa60e3d1518932e29741693d96b6283a812702a58b174ace02df85501049"} Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.695620 4823 scope.go:117] "RemoveContainer" containerID="dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.715332 4823 scope.go:117] "RemoveContainer" containerID="5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.729948 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xntc"] Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.750616 4823 scope.go:117] "RemoveContainer" containerID="31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.751951 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xntc"] Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.789780 4823 scope.go:117] "RemoveContainer" containerID="dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954" Dec 06 07:24:44 crc kubenswrapper[4823]: E1206 07:24:44.790303 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954\": container with ID starting with dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954 not found: ID does not exist" containerID="dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.790361 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954"} err="failed to get container status \"dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954\": rpc error: code = NotFound desc = could not find container \"dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954\": container with ID starting with dc77cdad4063ae5dde627bfa2115967450d73dfa39c362203ca45734ea526954 not found: ID does not exist" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.790397 4823 scope.go:117] "RemoveContainer" containerID="5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd" Dec 06 07:24:44 crc kubenswrapper[4823]: E1206 07:24:44.790849 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd\": container with ID starting with 5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd not found: ID does not exist" containerID="5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.790886 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd"} err="failed to get container status \"5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd\": rpc error: code = NotFound desc = could not find container \"5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd\": container with ID starting with 5a4dac549ff6853123d6ca9989ce15b76554f51ed6fca264a66ebabf5d326acd not found: ID does not exist" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.790918 4823 scope.go:117] "RemoveContainer" containerID="31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656" Dec 06 07:24:44 crc kubenswrapper[4823]: E1206 07:24:44.791146 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656\": container with ID starting with 31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656 not found: ID does not exist" containerID="31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656" Dec 06 07:24:44 crc kubenswrapper[4823]: I1206 07:24:44.791182 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656"} err="failed to get container status \"31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656\": rpc error: code = NotFound desc = could not find container \"31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656\": container with ID starting with 31d7dd51fb0cbbf4a71f234a2fe0c68b4e2a457f286a48d350ff715c43aa5656 not found: ID does not exist" Dec 06 07:24:45 crc kubenswrapper[4823]: I1206 07:24:45.153807 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" path="/var/lib/kubelet/pods/6db57e68-6479-4520-8b5a-ac125c403cfa/volumes" Dec 06 07:25:06 crc kubenswrapper[4823]: I1206 07:25:06.051858 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:25:06 crc kubenswrapper[4823]: I1206 07:25:06.052414 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:25:36 crc kubenswrapper[4823]: I1206 07:25:36.052228 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:25:36 crc kubenswrapper[4823]: I1206 07:25:36.052757 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.051528 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.052142 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.052199 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.053105 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.053174 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" gracePeriod=600 Dec 06 07:26:06 crc kubenswrapper[4823]: E1206 07:26:06.179563 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.784268 4823 
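The machine-config-daemon liveness probe fails at 07:25:06, 07:25:36 and 07:26:06, a 30-second cadence, and on the third consecutive failure the kubelet marks the probe unhealthy and kills the container with gracePeriod=600. That timing is consistent with periodSeconds=30 and failureThreshold=3, though those spec values are inferred from the log, not shown in it. A sketch replaying the HTTP check itself against the endpoint from the probe output; kubelet counts any 2xx/3xx response as success:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Replays the liveness check against the endpoint seen in the probe output.
// A refused connection here is exactly the "connect: connection refused"
// failure recorded for machine-config-daemon-7wlj2.
func main() {
	client := &http.Client{Timeout: time.Second} // probe timeouts are short
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure:", resp.Status)
	}
}
```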
generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" exitCode=0 Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.784346 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62"} Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.784607 4823 scope.go:117] "RemoveContainer" containerID="586a74a448f82acac41b8e54bb568e0eb3040601caa978dce2662a8d7af685c7" Dec 06 07:26:06 crc kubenswrapper[4823]: I1206 07:26:06.785381 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:26:06 crc kubenswrapper[4823]: E1206 07:26:06.785910 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:26:19 crc kubenswrapper[4823]: I1206 07:26:19.150041 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:26:19 crc kubenswrapper[4823]: E1206 07:26:19.150961 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:26:33 crc kubenswrapper[4823]: I1206 07:26:33.141372 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:26:33 crc kubenswrapper[4823]: E1206 07:26:33.143478 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:26:45 crc kubenswrapper[4823]: I1206 07:26:45.142159 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:26:45 crc kubenswrapper[4823]: E1206 07:26:45.142968 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:26:59 crc kubenswrapper[4823]: I1206 07:26:59.152873 4823 scope.go:117] "RemoveContainer" 
containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:26:59 crc kubenswrapper[4823]: E1206 07:26:59.153870 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:27:13 crc kubenswrapper[4823]: I1206 07:27:13.141628 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:27:13 crc kubenswrapper[4823]: E1206 07:27:13.142510 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:27:27 crc kubenswrapper[4823]: I1206 07:27:27.140773 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:27:27 crc kubenswrapper[4823]: E1206 07:27:27.141482 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:27:41 crc kubenswrapper[4823]: I1206 07:27:41.140598 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:27:41 crc kubenswrapper[4823]: E1206 07:27:41.142491 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:27:55 crc kubenswrapper[4823]: I1206 07:27:55.140749 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:27:55 crc kubenswrapper[4823]: E1206 07:27:55.142583 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:28:10 crc kubenswrapper[4823]: I1206 07:28:10.141481 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:28:10 crc kubenswrapper[4823]: E1206 07:28:10.143416 4823 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:28:22 crc kubenswrapper[4823]: I1206 07:28:22.141270 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:28:22 crc kubenswrapper[4823]: E1206 07:28:22.142122 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:28:37 crc kubenswrapper[4823]: I1206 07:28:37.141789 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:28:37 crc kubenswrapper[4823]: E1206 07:28:37.143563 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:28:49 crc kubenswrapper[4823]: I1206 07:28:49.151366 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:28:49 crc kubenswrapper[4823]: E1206 07:28:49.152261 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:29:04 crc kubenswrapper[4823]: I1206 07:29:04.142179 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:29:04 crc kubenswrapper[4823]: E1206 07:29:04.143794 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:29:16 crc kubenswrapper[4823]: I1206 07:29:16.140789 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:29:16 crc kubenswrapper[4823]: E1206 07:29:16.141730 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:29:28 crc kubenswrapper[4823]: I1206 07:29:28.140736 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:29:28 crc kubenswrapper[4823]: E1206 07:29:28.142840 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:29:41 crc kubenswrapper[4823]: I1206 07:29:41.140691 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:29:41 crc kubenswrapper[4823]: E1206 07:29:41.141512 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:29:53 crc kubenswrapper[4823]: I1206 07:29:53.141052 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:29:53 crc kubenswrapper[4823]: E1206 07:29:53.141892 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.181841 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49"] Dec 06 07:30:00 crc kubenswrapper[4823]: E1206 07:30:00.183024 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183042 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4823]: E1206 07:30:00.183064 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="extract-utilities" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183073 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="extract-utilities" Dec 06 07:30:00 crc kubenswrapper[4823]: E1206 07:30:00.183103 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="extract-utilities" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183111 4823 state_mem.go:107] "Deleted CPUSet assignment" 
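[Editor's note: the elided run above is the kubelet sync loop re-queueing the pod while its crash-loop back-off is still in force; every retry asks to StartContainer and pod_workers rejects it with the current back-off. A minimal sketch of the delay schedule follows, assuming kubelet's usual 10 s base delay that doubles per failed restart up to the 5m0s cap quoted in the error; the constants are assumptions, not read from this cluster's config. The back-off finally expires and the container restarts at 07:31:11 further down.]

```go
// Hypothetical sketch (not kubelet source): the crash-loop back-off schedule
// implied by the repeated "back-off 5m0s restarting failed container" errors.
// Assumed: 10s base delay, doubled per failed restart, capped at 5 minutes.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed base delay
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap quoted in the log
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: next StartContainer allowed after %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// After a handful of crashes the delay pins at 5m0s, which is why every
	// sync-loop retry in the elided run fails with the same message.
}
```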
podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="extract-utilities" Dec 06 07:30:00 crc kubenswrapper[4823]: E1206 07:30:00.183130 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="extract-content" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183137 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="extract-content" Dec 06 07:30:00 crc kubenswrapper[4823]: E1206 07:30:00.183152 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="extract-content" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183159 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="extract-content" Dec 06 07:30:00 crc kubenswrapper[4823]: E1206 07:30:00.183173 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183180 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183438 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bfc0b86-2fe3-4e12-b105-7614d17f1eab" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.183542 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db57e68-6479-4520-8b5a-ac125c403cfa" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.184494 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.187314 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.188551 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.193527 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49"] Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.271849 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmchs\" (UniqueName: \"kubernetes.io/projected/a8634508-63a9-419e-867f-046ef2b49a5d-kube-api-access-bmchs\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.272046 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8634508-63a9-419e-867f-046ef2b49a5d-config-volume\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.272208 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8634508-63a9-419e-867f-046ef2b49a5d-secret-volume\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.374005 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmchs\" (UniqueName: \"kubernetes.io/projected/a8634508-63a9-419e-867f-046ef2b49a5d-kube-api-access-bmchs\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.374133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8634508-63a9-419e-867f-046ef2b49a5d-config-volume\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.374175 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8634508-63a9-419e-867f-046ef2b49a5d-secret-volume\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.375097 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8634508-63a9-419e-867f-046ef2b49a5d-config-volume\") pod 
\"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.380199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8634508-63a9-419e-867f-046ef2b49a5d-secret-volume\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.392142 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmchs\" (UniqueName: \"kubernetes.io/projected/a8634508-63a9-419e-867f-046ef2b49a5d-kube-api-access-bmchs\") pod \"collect-profiles-29416770-ttb49\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.517532 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:00 crc kubenswrapper[4823]: I1206 07:30:00.995612 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49"] Dec 06 07:30:01 crc kubenswrapper[4823]: I1206 07:30:01.192570 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" event={"ID":"a8634508-63a9-419e-867f-046ef2b49a5d","Type":"ContainerStarted","Data":"4e4760f776001b7a2bacb1c2c82185067a9a346669e0de5dce938166b46114ae"} Dec 06 07:30:02 crc kubenswrapper[4823]: I1206 07:30:02.236150 4823 generic.go:334] "Generic (PLEG): container finished" podID="a8634508-63a9-419e-867f-046ef2b49a5d" containerID="5f2f5dfdfeed10ac196cc31b35b072c5a4a0aeb3cbb926fa93d693dc56ecf3ac" exitCode=0 Dec 06 07:30:02 crc kubenswrapper[4823]: I1206 07:30:02.236540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" event={"ID":"a8634508-63a9-419e-867f-046ef2b49a5d","Type":"ContainerDied","Data":"5f2f5dfdfeed10ac196cc31b35b072c5a4a0aeb3cbb926fa93d693dc56ecf3ac"} Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.676863 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.808795 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8634508-63a9-419e-867f-046ef2b49a5d-secret-volume\") pod \"a8634508-63a9-419e-867f-046ef2b49a5d\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.808953 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8634508-63a9-419e-867f-046ef2b49a5d-config-volume\") pod \"a8634508-63a9-419e-867f-046ef2b49a5d\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.809000 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmchs\" (UniqueName: \"kubernetes.io/projected/a8634508-63a9-419e-867f-046ef2b49a5d-kube-api-access-bmchs\") pod \"a8634508-63a9-419e-867f-046ef2b49a5d\" (UID: \"a8634508-63a9-419e-867f-046ef2b49a5d\") " Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.811544 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8634508-63a9-419e-867f-046ef2b49a5d-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8634508-63a9-419e-867f-046ef2b49a5d" (UID: "a8634508-63a9-419e-867f-046ef2b49a5d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.819632 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8634508-63a9-419e-867f-046ef2b49a5d-kube-api-access-bmchs" (OuterVolumeSpecName: "kube-api-access-bmchs") pod "a8634508-63a9-419e-867f-046ef2b49a5d" (UID: "a8634508-63a9-419e-867f-046ef2b49a5d"). InnerVolumeSpecName "kube-api-access-bmchs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.836847 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8634508-63a9-419e-867f-046ef2b49a5d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a8634508-63a9-419e-867f-046ef2b49a5d" (UID: "a8634508-63a9-419e-867f-046ef2b49a5d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.913726 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8634508-63a9-419e-867f-046ef2b49a5d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.913776 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8634508-63a9-419e-867f-046ef2b49a5d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:03 crc kubenswrapper[4823]: I1206 07:30:03.913790 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmchs\" (UniqueName: \"kubernetes.io/projected/a8634508-63a9-419e-867f-046ef2b49a5d-kube-api-access-bmchs\") on node \"crc\" DevicePath \"\"" Dec 06 07:30:04 crc kubenswrapper[4823]: I1206 07:30:04.141529 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:30:04 crc kubenswrapper[4823]: E1206 07:30:04.141982 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:30:04 crc kubenswrapper[4823]: I1206 07:30:04.255961 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" event={"ID":"a8634508-63a9-419e-867f-046ef2b49a5d","Type":"ContainerDied","Data":"4e4760f776001b7a2bacb1c2c82185067a9a346669e0de5dce938166b46114ae"} Dec 06 07:30:04 crc kubenswrapper[4823]: I1206 07:30:04.256026 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4760f776001b7a2bacb1c2c82185067a9a346669e0de5dce938166b46114ae" Dec 06 07:30:04 crc kubenswrapper[4823]: I1206 07:30:04.256092 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-ttb49" Dec 06 07:30:04 crc kubenswrapper[4823]: I1206 07:30:04.822939 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j"] Dec 06 07:30:04 crc kubenswrapper[4823]: I1206 07:30:04.832473 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-xs28j"] Dec 06 07:30:05 crc kubenswrapper[4823]: I1206 07:30:05.153828 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ebc36f-9460-42e9-a2e3-e6be86efaacb" path="/var/lib/kubelet/pods/52ebc36f-9460-42e9-a2e3-e6be86efaacb/volumes" Dec 06 07:30:15 crc kubenswrapper[4823]: I1206 07:30:15.141698 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:30:15 crc kubenswrapper[4823]: E1206 07:30:15.142466 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:30:30 crc kubenswrapper[4823]: I1206 07:30:30.141700 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:30:30 crc kubenswrapper[4823]: E1206 07:30:30.144291 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:30:44 crc kubenswrapper[4823]: I1206 07:30:44.550141 4823 scope.go:117] "RemoveContainer" containerID="8de692e587afe423c20918f3b539a4fcbab0a68c25985d26dadfb57ac291c8ff" Dec 06 07:30:45 crc kubenswrapper[4823]: I1206 07:30:45.141294 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:30:45 crc kubenswrapper[4823]: E1206 07:30:45.141806 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:30:58 crc kubenswrapper[4823]: I1206 07:30:58.140482 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:30:58 crc kubenswrapper[4823]: E1206 07:30:58.141342 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:31:10 crc kubenswrapper[4823]: I1206 07:31:10.141179 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:31:11 crc kubenswrapper[4823]: I1206 07:31:11.916034 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"85dc75ad0213c0ee14705a269e0ef01be56a6613db7e0ed707bde2badf1912ec"} Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.407543 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jzc75"] Dec 06 07:32:32 crc kubenswrapper[4823]: E1206 07:32:32.408394 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8634508-63a9-419e-867f-046ef2b49a5d" containerName="collect-profiles" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.408408 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8634508-63a9-419e-867f-046ef2b49a5d" containerName="collect-profiles" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.408624 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8634508-63a9-419e-867f-046ef2b49a5d" containerName="collect-profiles" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.410213 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.426806 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzc75"] Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.483554 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-catalog-content\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.483601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxl7q\" (UniqueName: \"kubernetes.io/projected/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-kube-api-access-nxl7q\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.483905 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-utilities\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.585571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-utilities\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.585740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-catalog-content\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.585760 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxl7q\" (UniqueName: \"kubernetes.io/projected/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-kube-api-access-nxl7q\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.586189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-utilities\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.586196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-catalog-content\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.614874 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxl7q\" (UniqueName: \"kubernetes.io/projected/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-kube-api-access-nxl7q\") pod \"redhat-operators-jzc75\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") " pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:32 crc kubenswrapper[4823]: I1206 07:32:32.733719 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:33 crc kubenswrapper[4823]: I1206 07:32:33.243410 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzc75"] Dec 06 07:32:33 crc kubenswrapper[4823]: I1206 07:32:33.708206 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerID="167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa" exitCode=0 Dec 06 07:32:33 crc kubenswrapper[4823]: I1206 07:32:33.708251 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerDied","Data":"167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa"} Dec 06 07:32:33 crc kubenswrapper[4823]: I1206 07:32:33.708279 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerStarted","Data":"fe5c687bcb93c479fac61af820be9540087d52044db7ff672b095d310d9d3ab6"} Dec 06 07:32:33 crc kubenswrapper[4823]: I1206 07:32:33.710510 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:32:35 crc kubenswrapper[4823]: I1206 07:32:35.729330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerStarted","Data":"d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c"} Dec 06 07:32:38 crc kubenswrapper[4823]: I1206 07:32:38.757109 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerID="d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c" exitCode=0 Dec 06 07:32:38 crc kubenswrapper[4823]: I1206 07:32:38.757212 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerDied","Data":"d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c"} Dec 06 07:32:39 crc kubenswrapper[4823]: I1206 07:32:39.768067 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerStarted","Data":"668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75"} Dec 06 07:32:39 crc kubenswrapper[4823]: I1206 07:32:39.800519 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jzc75" podStartSLOduration=2.267735569 podStartE2EDuration="7.800462788s" podCreationTimestamp="2025-12-06 07:32:32 +0000 UTC" firstStartedPulling="2025-12-06 07:32:33.710205254 +0000 UTC m=+4054.995957214" lastFinishedPulling="2025-12-06 07:32:39.242932473 +0000 UTC m=+4060.528684433" observedRunningTime="2025-12-06 07:32:39.785974459 +0000 UTC m=+4061.071726429" watchObservedRunningTime="2025-12-06 07:32:39.800462788 +0000 UTC m=+4061.086214748" Dec 06 07:32:42 crc kubenswrapper[4823]: I1206 07:32:42.734127 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:42 crc kubenswrapper[4823]: I1206 07:32:42.734486 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:43 crc 
kubenswrapper[4823]: I1206 07:32:43.793369 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzc75" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="registry-server" probeResult="failure" output=<
Dec 06 07:32:43 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s
Dec 06 07:32:43 crc kubenswrapper[4823]: >
Dec 06 07:32:52 crc kubenswrapper[4823]: I1206 07:32:52.800173 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jzc75"
Dec 06 07:32:52 crc kubenswrapper[4823]: I1206 07:32:52.851188 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jzc75"
Dec 06 07:32:53 crc kubenswrapper[4823]: I1206 07:32:53.040231 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzc75"]
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.434385 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jzc75" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="registry-server" containerID="cri-o://668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75" gracePeriod=2
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.963818 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzc75"
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.982704 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxl7q\" (UniqueName: \"kubernetes.io/projected/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-kube-api-access-nxl7q\") pod \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") "
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.982805 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-utilities\") pod \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") "
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.982995 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-catalog-content\") pod \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\" (UID: \"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb\") "
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.983749 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-utilities" (OuterVolumeSpecName: "utilities") pod "ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" (UID: "ec9d2d53-3f3e-4cab-af1c-9c2b499772eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:32:54 crc kubenswrapper[4823]: I1206 07:32:54.989777 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-kube-api-access-nxl7q" (OuterVolumeSpecName: "kube-api-access-nxl7q") pod "ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" (UID: "ec9d2d53-3f3e-4cab-af1c-9c2b499772eb"). InnerVolumeSpecName "kube-api-access-nxl7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
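[Editor's note: the startup probe above reports `timeout: failed to connect service ":50051" within 1s`, i.e. the registry-server's gRPC port was not yet accepting connections one second into the dial. The exact probe command is not visible in this excerpt; a minimal stand-in for that connectivity check might look like the following, where the localhost address and plain TCP dial are assumptions:]

```go
// Hypothetical sketch of the registry-server startup check: dial the gRPC
// port with a 1s deadline and report failure in the same shape as the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %v", addr, timeout)
	}
	conn.Close()
	return nil
}

func main() {
	// ":50051" in the log has no host; assume the pod's loopback here.
	if err := probe("127.0.0.1:50051", time.Second); err != nil {
		fmt.Println("startup probe failed:", err) // kubelet logs status="unhealthy"
	} else {
		fmt.Println("startup probe passed") // kubelet flips status to "started"
	}
}
```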
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.086128 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxl7q\" (UniqueName: \"kubernetes.io/projected/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-kube-api-access-nxl7q\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.086165 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.090762 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" (UID: "ec9d2d53-3f3e-4cab-af1c-9c2b499772eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.189426 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.446507 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerID="668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75" exitCode=0 Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.446565 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerDied","Data":"668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75"} Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.446600 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzc75" event={"ID":"ec9d2d53-3f3e-4cab-af1c-9c2b499772eb","Type":"ContainerDied","Data":"fe5c687bcb93c479fac61af820be9540087d52044db7ff672b095d310d9d3ab6"} Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.446624 4823 scope.go:117] "RemoveContainer" containerID="668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.446654 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzc75" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.470553 4823 scope.go:117] "RemoveContainer" containerID="d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.474186 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzc75"] Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.482794 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jzc75"] Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.492326 4823 scope.go:117] "RemoveContainer" containerID="167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.603384 4823 scope.go:117] "RemoveContainer" containerID="668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75" Dec 06 07:32:55 crc kubenswrapper[4823]: E1206 07:32:55.603937 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75\": container with ID starting with 668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75 not found: ID does not exist" containerID="668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.603978 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75"} err="failed to get container status \"668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75\": rpc error: code = NotFound desc = could not find container \"668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75\": container with ID starting with 668ae6c1561f41299418fd78488c3020316028a0a9e91adcfbd3fa9ac4761b75 not found: ID does not exist" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.604006 4823 scope.go:117] "RemoveContainer" containerID="d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c" Dec 06 07:32:55 crc kubenswrapper[4823]: E1206 07:32:55.604249 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c\": container with ID starting with d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c not found: ID does not exist" containerID="d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.604278 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c"} err="failed to get container status \"d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c\": rpc error: code = NotFound desc = could not find container \"d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c\": container with ID starting with d3cc0efbecade7b2ade15598208911d405005b19523eae79154416fb6be0ab7c not found: ID does not exist" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.604296 4823 scope.go:117] "RemoveContainer" containerID="167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa" Dec 06 07:32:55 crc kubenswrapper[4823]: E1206 07:32:55.604510 4823 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa\": container with ID starting with 167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa not found: ID does not exist" containerID="167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa" Dec 06 07:32:55 crc kubenswrapper[4823]: I1206 07:32:55.604538 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa"} err="failed to get container status \"167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa\": rpc error: code = NotFound desc = could not find container \"167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa\": container with ID starting with 167466c2265e17c1407cc8bd70c640f2d559c7293e65b482ed92a63e89e9b8fa not found: ID does not exist" Dec 06 07:32:57 crc kubenswrapper[4823]: I1206 07:32:57.155567 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" path="/var/lib/kubelet/pods/ec9d2d53-3f3e-4cab-af1c-9c2b499772eb/volumes" Dec 06 07:33:36 crc kubenswrapper[4823]: I1206 07:33:36.052231 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:33:36 crc kubenswrapper[4823]: I1206 07:33:36.052680 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:34:06 crc kubenswrapper[4823]: I1206 07:34:06.053773 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:34:06 crc kubenswrapper[4823]: I1206 07:34:06.054413 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.052086 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.052707 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.052760 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.053635 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85dc75ad0213c0ee14705a269e0ef01be56a6613db7e0ed707bde2badf1912ec"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.053720 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://85dc75ad0213c0ee14705a269e0ef01be56a6613db7e0ed707bde2badf1912ec" gracePeriod=600 Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.584053 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="85dc75ad0213c0ee14705a269e0ef01be56a6613db7e0ed707bde2badf1912ec" exitCode=0 Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.584110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"85dc75ad0213c0ee14705a269e0ef01be56a6613db7e0ed707bde2badf1912ec"} Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.584597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d"} Dec 06 07:34:36 crc kubenswrapper[4823]: I1206 07:34:36.584619 4823 scope.go:117] "RemoveContainer" containerID="ac7b395060e5f061d9a2140242696a13fc13a70095164145fe14c5db93ca3e62" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.396777 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pxbrj"] Dec 06 07:35:13 crc kubenswrapper[4823]: E1206 07:35:13.397767 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="extract-content" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.397783 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="extract-content" Dec 06 07:35:13 crc kubenswrapper[4823]: E1206 07:35:13.397807 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="registry-server" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.397814 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="registry-server" Dec 06 07:35:13 crc kubenswrapper[4823]: E1206 07:35:13.397839 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="extract-utilities" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.397849 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="extract-utilities" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.398077 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ec9d2d53-3f3e-4cab-af1c-9c2b499772eb" containerName="registry-server" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.399698 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.462155 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxbrj"] Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.468568 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvpjr\" (UniqueName: \"kubernetes.io/projected/296fc094-d06b-409f-94ce-4e29911f8837-kube-api-access-xvpjr\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.468740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-utilities\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.469048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-catalog-content\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.570947 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-catalog-content\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.571221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvpjr\" (UniqueName: \"kubernetes.io/projected/296fc094-d06b-409f-94ce-4e29911f8837-kube-api-access-xvpjr\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.571353 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-utilities\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.571522 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-catalog-content\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.571722 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-utilities\") pod \"certified-operators-pxbrj\" (UID: 
\"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.590158 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvpjr\" (UniqueName: \"kubernetes.io/projected/296fc094-d06b-409f-94ce-4e29911f8837-kube-api-access-xvpjr\") pod \"certified-operators-pxbrj\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:13 crc kubenswrapper[4823]: I1206 07:35:13.725947 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:14 crc kubenswrapper[4823]: I1206 07:35:14.356309 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxbrj"] Dec 06 07:35:14 crc kubenswrapper[4823]: I1206 07:35:14.948180 4823 generic.go:334] "Generic (PLEG): container finished" podID="296fc094-d06b-409f-94ce-4e29911f8837" containerID="7b0f44229f3e0877eb649d20046d5268553671b7f63d132989920884dbb43880" exitCode=0 Dec 06 07:35:14 crc kubenswrapper[4823]: I1206 07:35:14.948436 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerDied","Data":"7b0f44229f3e0877eb649d20046d5268553671b7f63d132989920884dbb43880"} Dec 06 07:35:14 crc kubenswrapper[4823]: I1206 07:35:14.948463 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerStarted","Data":"48b3c6bd69ce03fbcd281c15faece02e833596a4ab2674d3808e5f410ce3f9dd"} Dec 06 07:35:15 crc kubenswrapper[4823]: I1206 07:35:15.958495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerStarted","Data":"9e916811e02f9ddd51a6823f156294cb041d454aaa86f985321c53cb103abd41"} Dec 06 07:35:16 crc kubenswrapper[4823]: I1206 07:35:16.970990 4823 generic.go:334] "Generic (PLEG): container finished" podID="296fc094-d06b-409f-94ce-4e29911f8837" containerID="9e916811e02f9ddd51a6823f156294cb041d454aaa86f985321c53cb103abd41" exitCode=0 Dec 06 07:35:16 crc kubenswrapper[4823]: I1206 07:35:16.971031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerDied","Data":"9e916811e02f9ddd51a6823f156294cb041d454aaa86f985321c53cb103abd41"} Dec 06 07:35:17 crc kubenswrapper[4823]: I1206 07:35:17.982403 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pj4n9"] Dec 06 07:35:17 crc kubenswrapper[4823]: I1206 07:35:17.988846 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerStarted","Data":"657b0897b87240a6d2d83c2907268713a218221960ac5e27b1f1cd488855532c"} Dec 06 07:35:17 crc kubenswrapper[4823]: I1206 07:35:17.989006 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.042556 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj4n9"] Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.047042 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pxbrj" podStartSLOduration=2.6135763499999998 podStartE2EDuration="5.047004162s" podCreationTimestamp="2025-12-06 07:35:13 +0000 UTC" firstStartedPulling="2025-12-06 07:35:14.951912649 +0000 UTC m=+4216.237664619" lastFinishedPulling="2025-12-06 07:35:17.385340471 +0000 UTC m=+4218.671092431" observedRunningTime="2025-12-06 07:35:18.017388198 +0000 UTC m=+4219.303140158" watchObservedRunningTime="2025-12-06 07:35:18.047004162 +0000 UTC m=+4219.332756122" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.068676 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rw7t\" (UniqueName: \"kubernetes.io/projected/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-kube-api-access-6rw7t\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.068790 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-utilities\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.068830 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-catalog-content\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.171194 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-utilities\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.171561 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-catalog-content\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.171851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rw7t\" (UniqueName: \"kubernetes.io/projected/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-kube-api-access-6rw7t\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.172833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-utilities\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.172852 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-catalog-content\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.201613 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rw7t\" (UniqueName: \"kubernetes.io/projected/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-kube-api-access-6rw7t\") pod \"redhat-marketplace-pj4n9\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.322757 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:18 crc kubenswrapper[4823]: I1206 07:35:18.831337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj4n9"] Dec 06 07:35:19 crc kubenswrapper[4823]: W1206 07:35:19.220192 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5a1474_a36c_4c23_8a43_e3aa06c0fdfe.slice/crio-79446d16d86235bd3fb9394ea4c11319aa6a2fbf1f72947b5aa2a2aa90e9dab0 WatchSource:0}: Error finding container 79446d16d86235bd3fb9394ea4c11319aa6a2fbf1f72947b5aa2a2aa90e9dab0: Status 404 returned error can't find the container with id 79446d16d86235bd3fb9394ea4c11319aa6a2fbf1f72947b5aa2a2aa90e9dab0 Dec 06 07:35:20 crc kubenswrapper[4823]: I1206 07:35:20.005849 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerID="66f467687de751dce4f8045cd10b57120d96e47834c93bca54cc4a6ef728316c" exitCode=0 Dec 06 07:35:20 crc kubenswrapper[4823]: I1206 07:35:20.005926 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj4n9" event={"ID":"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe","Type":"ContainerDied","Data":"66f467687de751dce4f8045cd10b57120d96e47834c93bca54cc4a6ef728316c"} Dec 06 07:35:20 crc kubenswrapper[4823]: I1206 07:35:20.006184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj4n9" event={"ID":"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe","Type":"ContainerStarted","Data":"79446d16d86235bd3fb9394ea4c11319aa6a2fbf1f72947b5aa2a2aa90e9dab0"} Dec 06 07:35:22 crc kubenswrapper[4823]: I1206 07:35:22.025437 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerID="75b2722d7b8db30e4af220a76942ae12cc260340400a411e2d4762031476ddf8" exitCode=0 Dec 06 07:35:22 crc kubenswrapper[4823]: I1206 07:35:22.026035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj4n9" event={"ID":"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe","Type":"ContainerDied","Data":"75b2722d7b8db30e4af220a76942ae12cc260340400a411e2d4762031476ddf8"} Dec 06 07:35:23 crc kubenswrapper[4823]: I1206 07:35:23.037746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj4n9" 
event={"ID":"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe","Type":"ContainerStarted","Data":"8c6e205ec67af27f0c152cdf5b23df11fc24527348f71d155ade66e729d7d4fb"} Dec 06 07:35:23 crc kubenswrapper[4823]: I1206 07:35:23.069849 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pj4n9" podStartSLOduration=3.677220039 podStartE2EDuration="6.069825705s" podCreationTimestamp="2025-12-06 07:35:17 +0000 UTC" firstStartedPulling="2025-12-06 07:35:20.008244518 +0000 UTC m=+4221.293996478" lastFinishedPulling="2025-12-06 07:35:22.400850174 +0000 UTC m=+4223.686602144" observedRunningTime="2025-12-06 07:35:23.056803279 +0000 UTC m=+4224.342555249" watchObservedRunningTime="2025-12-06 07:35:23.069825705 +0000 UTC m=+4224.355577675" Dec 06 07:35:23 crc kubenswrapper[4823]: I1206 07:35:23.726354 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:23 crc kubenswrapper[4823]: I1206 07:35:23.726425 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:23 crc kubenswrapper[4823]: I1206 07:35:23.782871 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:24 crc kubenswrapper[4823]: I1206 07:35:24.097522 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:25 crc kubenswrapper[4823]: I1206 07:35:25.971730 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxbrj"] Dec 06 07:35:26 crc kubenswrapper[4823]: I1206 07:35:26.073864 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pxbrj" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="registry-server" containerID="cri-o://657b0897b87240a6d2d83c2907268713a218221960ac5e27b1f1cd488855532c" gracePeriod=2 Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.093412 4823 generic.go:334] "Generic (PLEG): container finished" podID="296fc094-d06b-409f-94ce-4e29911f8837" containerID="657b0897b87240a6d2d83c2907268713a218221960ac5e27b1f1cd488855532c" exitCode=0 Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.093704 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerDied","Data":"657b0897b87240a6d2d83c2907268713a218221960ac5e27b1f1cd488855532c"} Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.093732 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxbrj" event={"ID":"296fc094-d06b-409f-94ce-4e29911f8837","Type":"ContainerDied","Data":"48b3c6bd69ce03fbcd281c15faece02e833596a4ab2674d3808e5f410ce3f9dd"} Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.093761 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48b3c6bd69ce03fbcd281c15faece02e833596a4ab2674d3808e5f410ce3f9dd" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.171219 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.201824 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvpjr\" (UniqueName: \"kubernetes.io/projected/296fc094-d06b-409f-94ce-4e29911f8837-kube-api-access-xvpjr\") pod \"296fc094-d06b-409f-94ce-4e29911f8837\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.201888 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-catalog-content\") pod \"296fc094-d06b-409f-94ce-4e29911f8837\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.201998 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-utilities\") pod \"296fc094-d06b-409f-94ce-4e29911f8837\" (UID: \"296fc094-d06b-409f-94ce-4e29911f8837\") " Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.304650 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-utilities" (OuterVolumeSpecName: "utilities") pod "296fc094-d06b-409f-94ce-4e29911f8837" (UID: "296fc094-d06b-409f-94ce-4e29911f8837"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.307935 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.312629 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296fc094-d06b-409f-94ce-4e29911f8837-kube-api-access-xvpjr" (OuterVolumeSpecName: "kube-api-access-xvpjr") pod "296fc094-d06b-409f-94ce-4e29911f8837" (UID: "296fc094-d06b-409f-94ce-4e29911f8837"). InnerVolumeSpecName "kube-api-access-xvpjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.374373 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "296fc094-d06b-409f-94ce-4e29911f8837" (UID: "296fc094-d06b-409f-94ce-4e29911f8837"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.409412 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296fc094-d06b-409f-94ce-4e29911f8837-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:35:27 crc kubenswrapper[4823]: I1206 07:35:27.409455 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvpjr\" (UniqueName: \"kubernetes.io/projected/296fc094-d06b-409f-94ce-4e29911f8837-kube-api-access-xvpjr\") on node \"crc\" DevicePath \"\"" Dec 06 07:35:28 crc kubenswrapper[4823]: I1206 07:35:28.102173 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pxbrj" Dec 06 07:35:28 crc kubenswrapper[4823]: I1206 07:35:28.140974 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxbrj"] Dec 06 07:35:28 crc kubenswrapper[4823]: I1206 07:35:28.157806 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pxbrj"] Dec 06 07:35:28 crc kubenswrapper[4823]: I1206 07:35:28.324098 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:28 crc kubenswrapper[4823]: I1206 07:35:28.324474 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:28 crc kubenswrapper[4823]: I1206 07:35:28.381721 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:29 crc kubenswrapper[4823]: I1206 07:35:29.157839 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296fc094-d06b-409f-94ce-4e29911f8837" path="/var/lib/kubelet/pods/296fc094-d06b-409f-94ce-4e29911f8837/volumes" Dec 06 07:35:29 crc kubenswrapper[4823]: I1206 07:35:29.163376 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:30 crc kubenswrapper[4823]: I1206 07:35:30.372730 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj4n9"] Dec 06 07:35:31 crc kubenswrapper[4823]: I1206 07:35:31.132148 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pj4n9" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="registry-server" containerID="cri-o://8c6e205ec67af27f0c152cdf5b23df11fc24527348f71d155ade66e729d7d4fb" gracePeriod=2 Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.156187 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerID="8c6e205ec67af27f0c152cdf5b23df11fc24527348f71d155ade66e729d7d4fb" exitCode=0 Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.156262 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj4n9" event={"ID":"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe","Type":"ContainerDied","Data":"8c6e205ec67af27f0c152cdf5b23df11fc24527348f71d155ade66e729d7d4fb"} Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.156516 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pj4n9" event={"ID":"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe","Type":"ContainerDied","Data":"79446d16d86235bd3fb9394ea4c11319aa6a2fbf1f72947b5aa2a2aa90e9dab0"} Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.156536 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79446d16d86235bd3fb9394ea4c11319aa6a2fbf1f72947b5aa2a2aa90e9dab0" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.267490 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.411881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-catalog-content\") pod \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.412085 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-utilities\") pod \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.412467 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rw7t\" (UniqueName: \"kubernetes.io/projected/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-kube-api-access-6rw7t\") pod \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\" (UID: \"0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe\") " Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.414861 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-utilities" (OuterVolumeSpecName: "utilities") pod "0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" (UID: "0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.434798 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-kube-api-access-6rw7t" (OuterVolumeSpecName: "kube-api-access-6rw7t") pod "0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" (UID: "0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe"). InnerVolumeSpecName "kube-api-access-6rw7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.437627 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" (UID: "0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.515381 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rw7t\" (UniqueName: \"kubernetes.io/projected/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-kube-api-access-6rw7t\") on node \"crc\" DevicePath \"\"" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.515423 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:35:32 crc kubenswrapper[4823]: I1206 07:35:32.515433 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:35:33 crc kubenswrapper[4823]: I1206 07:35:33.164990 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pj4n9" Dec 06 07:35:33 crc kubenswrapper[4823]: I1206 07:35:33.193826 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj4n9"] Dec 06 07:35:33 crc kubenswrapper[4823]: I1206 07:35:33.202951 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pj4n9"] Dec 06 07:35:35 crc kubenswrapper[4823]: I1206 07:35:35.156025 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" path="/var/lib/kubelet/pods/0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe/volumes" Dec 06 07:36:36 crc kubenswrapper[4823]: I1206 07:36:36.052017 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:36:36 crc kubenswrapper[4823]: I1206 07:36:36.052519 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:37:06 crc kubenswrapper[4823]: I1206 07:37:06.052293 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:37:06 crc kubenswrapper[4823]: I1206 07:37:06.052989 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.051506 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.052033 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.052104 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.052851 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.052899 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" gracePeriod=600 Dec 06 07:37:36 crc kubenswrapper[4823]: E1206 07:37:36.178888 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.434900 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" exitCode=0 Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.434970 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d"} Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.435084 4823 scope.go:117] "RemoveContainer" containerID="85dc75ad0213c0ee14705a269e0ef01be56a6613db7e0ed707bde2badf1912ec" Dec 06 07:37:36 crc kubenswrapper[4823]: I1206 07:37:36.435786 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:37:36 crc kubenswrapper[4823]: E1206 07:37:36.436053 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:37:51 crc kubenswrapper[4823]: I1206 07:37:51.142104 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:37:51 crc kubenswrapper[4823]: E1206 07:37:51.142946 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:38:03 crc kubenswrapper[4823]: I1206 07:38:03.141782 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:38:03 crc kubenswrapper[4823]: E1206 07:38:03.142628 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:38:18 crc kubenswrapper[4823]: I1206 07:38:18.141519 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:38:18 crc kubenswrapper[4823]: E1206 07:38:18.142410 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:38:32 crc kubenswrapper[4823]: I1206 07:38:32.141284 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:38:32 crc kubenswrapper[4823]: E1206 07:38:32.142042 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:38:45 crc kubenswrapper[4823]: I1206 07:38:45.140907 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:38:45 crc kubenswrapper[4823]: E1206 07:38:45.141999 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:39:00 crc kubenswrapper[4823]: I1206 07:39:00.141410 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:39:00 crc kubenswrapper[4823]: E1206 07:39:00.142163 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:39:11 crc kubenswrapper[4823]: I1206 07:39:11.144874 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:39:11 crc kubenswrapper[4823]: E1206 07:39:11.145742 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" 
podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.141203 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.142087 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.819917 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5tnvt"] Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.820811 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="extract-content" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.820832 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="extract-content" Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.820843 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="registry-server" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.820850 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="registry-server" Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.820881 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="extract-content" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.820889 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="extract-content" Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.820901 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="extract-utilities" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.820908 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="extract-utilities" Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.820926 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="extract-utilities" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.820932 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="extract-utilities" Dec 06 07:39:24 crc kubenswrapper[4823]: E1206 07:39:24.820944 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="registry-server" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.820951 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="registry-server" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.821160 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e5a1474-a36c-4c23-8a43-e3aa06c0fdfe" containerName="registry-server" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.821191 4823 
memory_manager.go:354] "RemoveStaleState removing state" podUID="296fc094-d06b-409f-94ce-4e29911f8837" containerName="registry-server" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.823267 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.840049 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tnvt"] Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.976416 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-catalog-content\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.976847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-utilities\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:24 crc kubenswrapper[4823]: I1206 07:39:24.976888 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-kube-api-access-mcf77\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.078745 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-utilities\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.078795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-kube-api-access-mcf77\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.078865 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-catalog-content\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.079365 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-catalog-content\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.079365 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-utilities\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.101479 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/8aa2000e-0a46-4d7c-b13a-4ae913db9b28-kube-api-access-mcf77\") pod \"community-operators-5tnvt\" (UID: \"8aa2000e-0a46-4d7c-b13a-4ae913db9b28\") " pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.160481 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:25 crc kubenswrapper[4823]: I1206 07:39:25.944737 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tnvt"] Dec 06 07:39:26 crc kubenswrapper[4823]: I1206 07:39:26.321036 4823 generic.go:334] "Generic (PLEG): container finished" podID="8aa2000e-0a46-4d7c-b13a-4ae913db9b28" containerID="1dad6c0b73a2a70411145f0e2acceec78f25bb8b0e5af11edb7e346d138bc2ee" exitCode=0 Dec 06 07:39:26 crc kubenswrapper[4823]: I1206 07:39:26.321138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tnvt" event={"ID":"8aa2000e-0a46-4d7c-b13a-4ae913db9b28","Type":"ContainerDied","Data":"1dad6c0b73a2a70411145f0e2acceec78f25bb8b0e5af11edb7e346d138bc2ee"} Dec 06 07:39:26 crc kubenswrapper[4823]: I1206 07:39:26.321350 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tnvt" event={"ID":"8aa2000e-0a46-4d7c-b13a-4ae913db9b28","Type":"ContainerStarted","Data":"d3a34afc1bdf5bfe90c5882717ecbcce215bcd251795fdbef3cf76b4d219b915"} Dec 06 07:39:26 crc kubenswrapper[4823]: I1206 07:39:26.323384 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:39:32 crc kubenswrapper[4823]: I1206 07:39:32.384919 4823 generic.go:334] "Generic (PLEG): container finished" podID="8aa2000e-0a46-4d7c-b13a-4ae913db9b28" containerID="8e04804a5d6b7e77f16058f3d9ba91dd52cdee1db7e7f114da9b615f1ce3dc05" exitCode=0 Dec 06 07:39:32 crc kubenswrapper[4823]: I1206 07:39:32.385027 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tnvt" event={"ID":"8aa2000e-0a46-4d7c-b13a-4ae913db9b28","Type":"ContainerDied","Data":"8e04804a5d6b7e77f16058f3d9ba91dd52cdee1db7e7f114da9b615f1ce3dc05"} Dec 06 07:39:34 crc kubenswrapper[4823]: I1206 07:39:34.404381 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tnvt" event={"ID":"8aa2000e-0a46-4d7c-b13a-4ae913db9b28","Type":"ContainerStarted","Data":"39bb1a75b26d18f74b5f00afd5046ca6d9c64a3149ccdce9453eb2c8f194898d"} Dec 06 07:39:34 crc kubenswrapper[4823]: I1206 07:39:34.430252 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5tnvt" podStartSLOduration=3.860104611 podStartE2EDuration="10.430229333s" podCreationTimestamp="2025-12-06 07:39:24 +0000 UTC" firstStartedPulling="2025-12-06 07:39:26.323060137 +0000 UTC m=+4467.608812097" lastFinishedPulling="2025-12-06 07:39:32.893184859 +0000 UTC m=+4474.178936819" observedRunningTime="2025-12-06 07:39:34.421246034 +0000 UTC m=+4475.706998004" watchObservedRunningTime="2025-12-06 07:39:34.430229333 
+0000 UTC m=+4475.715981293" Dec 06 07:39:35 crc kubenswrapper[4823]: I1206 07:39:35.160831 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:35 crc kubenswrapper[4823]: I1206 07:39:35.160880 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:36 crc kubenswrapper[4823]: I1206 07:39:36.212025 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5tnvt" podUID="8aa2000e-0a46-4d7c-b13a-4ae913db9b28" containerName="registry-server" probeResult="failure" output=< Dec 06 07:39:36 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 07:39:36 crc kubenswrapper[4823]: > Dec 06 07:39:37 crc kubenswrapper[4823]: I1206 07:39:37.142026 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:39:37 crc kubenswrapper[4823]: E1206 07:39:37.142859 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:39:45 crc kubenswrapper[4823]: I1206 07:39:45.222050 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:45 crc kubenswrapper[4823]: I1206 07:39:45.273403 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5tnvt" Dec 06 07:39:45 crc kubenswrapper[4823]: I1206 07:39:45.338622 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tnvt"] Dec 06 07:39:45 crc kubenswrapper[4823]: I1206 07:39:45.462016 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhxkq"] Dec 06 07:39:45 crc kubenswrapper[4823]: I1206 07:39:45.462313 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xhxkq" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="registry-server" containerID="cri-o://c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1" gracePeriod=2 Dec 06 07:39:45 crc kubenswrapper[4823]: I1206 07:39:45.982242 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.103177 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-catalog-content\") pod \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.103367 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96wb\" (UniqueName: \"kubernetes.io/projected/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-kube-api-access-m96wb\") pod \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.103465 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-utilities\") pod \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\" (UID: \"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d\") " Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.107136 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-utilities" (OuterVolumeSpecName: "utilities") pod "93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" (UID: "93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.124424 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-kube-api-access-m96wb" (OuterVolumeSpecName: "kube-api-access-m96wb") pod "93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" (UID: "93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d"). InnerVolumeSpecName "kube-api-access-m96wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.197406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" (UID: "93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.205595 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96wb\" (UniqueName: \"kubernetes.io/projected/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-kube-api-access-m96wb\") on node \"crc\" DevicePath \"\"" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.205851 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.205969 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.516583 4823 generic.go:334] "Generic (PLEG): container finished" podID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerID="c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1" exitCode=0 Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.516633 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhxkq" event={"ID":"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d","Type":"ContainerDied","Data":"c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1"} Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.516740 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhxkq" event={"ID":"93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d","Type":"ContainerDied","Data":"19fe9d7c2cfa41a9919d432a2216c5c5a8fe4de3187e79fd06e4bbf089025394"} Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.516764 4823 scope.go:117] "RemoveContainer" containerID="c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.516677 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhxkq" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.555788 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhxkq"] Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.564604 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xhxkq"] Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.564803 4823 scope.go:117] "RemoveContainer" containerID="cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.587636 4823 scope.go:117] "RemoveContainer" containerID="faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.656614 4823 scope.go:117] "RemoveContainer" containerID="c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1" Dec 06 07:39:46 crc kubenswrapper[4823]: E1206 07:39:46.657095 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1\": container with ID starting with c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1 not found: ID does not exist" containerID="c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.657147 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1"} err="failed to get container status \"c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1\": rpc error: code = NotFound desc = could not find container \"c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1\": container with ID starting with c2a927349ea4f8b2770e4c9a27f37c7f0dd45e0f139f96782f0a8b994dbc0ea1 not found: ID does not exist" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.657173 4823 scope.go:117] "RemoveContainer" containerID="cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f" Dec 06 07:39:46 crc kubenswrapper[4823]: E1206 07:39:46.657615 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f\": container with ID starting with cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f not found: ID does not exist" containerID="cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.657646 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f"} err="failed to get container status \"cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f\": rpc error: code = NotFound desc = could not find container \"cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f\": container with ID starting with cbf74f8a48f468233aab05e5769fd2c4c76ed2c1ac76e1bdbaaff2042b66e36f not found: ID does not exist" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.657681 4823 scope.go:117] "RemoveContainer" containerID="faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64" Dec 06 07:39:46 crc kubenswrapper[4823]: E1206 07:39:46.658039 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64\": container with ID starting with faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64 not found: ID does not exist" containerID="faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64" Dec 06 07:39:46 crc kubenswrapper[4823]: I1206 07:39:46.658075 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64"} err="failed to get container status \"faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64\": rpc error: code = NotFound desc = could not find container \"faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64\": container with ID starting with faafbd989dbe20dc351162489e868eea874f2942cf9a052270c8ed4183cd1b64 not found: ID does not exist" Dec 06 07:39:47 crc kubenswrapper[4823]: I1206 07:39:47.152018 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" path="/var/lib/kubelet/pods/93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d/volumes" Dec 06 07:39:50 crc kubenswrapper[4823]: I1206 07:39:50.141151 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:39:50 crc kubenswrapper[4823]: E1206 07:39:50.141783 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:40:01 crc kubenswrapper[4823]: I1206 07:40:01.174628 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:40:01 crc kubenswrapper[4823]: E1206 07:40:01.175334 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:40:13 crc kubenswrapper[4823]: I1206 07:40:13.141515 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:40:13 crc kubenswrapper[4823]: E1206 07:40:13.142364 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:40:28 crc kubenswrapper[4823]: I1206 07:40:28.141360 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:40:28 crc kubenswrapper[4823]: E1206 07:40:28.142285 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:40:40 crc kubenswrapper[4823]: I1206 07:40:40.141699 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:40:40 crc kubenswrapper[4823]: E1206 07:40:40.142344 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:40:54 crc kubenswrapper[4823]: I1206 07:40:54.141471 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:40:54 crc kubenswrapper[4823]: E1206 07:40:54.143068 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:41:09 crc kubenswrapper[4823]: I1206 07:41:09.147509 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:41:09 crc kubenswrapper[4823]: E1206 07:41:09.148821 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:41:22 crc kubenswrapper[4823]: I1206 07:41:22.140216 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:41:22 crc kubenswrapper[4823]: E1206 07:41:22.141164 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:41:36 crc kubenswrapper[4823]: I1206 07:41:36.140602 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:41:36 crc kubenswrapper[4823]: E1206 07:41:36.141329 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:41:44 crc kubenswrapper[4823]: I1206 07:41:44.891992 4823 scope.go:117] "RemoveContainer" containerID="75b2722d7b8db30e4af220a76942ae12cc260340400a411e2d4762031476ddf8" Dec 06 07:41:44 crc kubenswrapper[4823]: I1206 07:41:44.918085 4823 scope.go:117] "RemoveContainer" containerID="8c6e205ec67af27f0c152cdf5b23df11fc24527348f71d155ade66e729d7d4fb" Dec 06 07:41:44 crc kubenswrapper[4823]: I1206 07:41:44.961384 4823 scope.go:117] "RemoveContainer" containerID="7b0f44229f3e0877eb649d20046d5268553671b7f63d132989920884dbb43880" Dec 06 07:41:44 crc kubenswrapper[4823]: I1206 07:41:44.985289 4823 scope.go:117] "RemoveContainer" containerID="9e916811e02f9ddd51a6823f156294cb041d454aaa86f985321c53cb103abd41" Dec 06 07:41:45 crc kubenswrapper[4823]: I1206 07:41:45.031271 4823 scope.go:117] "RemoveContainer" containerID="66f467687de751dce4f8045cd10b57120d96e47834c93bca54cc4a6ef728316c" Dec 06 07:41:45 crc kubenswrapper[4823]: I1206 07:41:45.078547 4823 scope.go:117] "RemoveContainer" containerID="657b0897b87240a6d2d83c2907268713a218221960ac5e27b1f1cd488855532c" Dec 06 07:41:49 crc kubenswrapper[4823]: I1206 07:41:49.146461 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:41:49 crc kubenswrapper[4823]: E1206 07:41:49.147257 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:42:04 crc kubenswrapper[4823]: I1206 07:42:04.140973 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:42:04 crc kubenswrapper[4823]: E1206 07:42:04.141896 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:42:16 crc kubenswrapper[4823]: I1206 07:42:16.141397 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:42:16 crc kubenswrapper[4823]: E1206 07:42:16.142118 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:42:30 crc kubenswrapper[4823]: I1206 07:42:30.140874 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:42:30 crc kubenswrapper[4823]: E1206 07:42:30.141744 
4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:42:41 crc kubenswrapper[4823]: I1206 07:42:41.140960 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:42:42 crc kubenswrapper[4823]: I1206 07:42:42.289613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"d406098cecd6a771940967f9978caf5dafb2641fab3e2c540d0dc94f6bb77259"} Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.434110 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8c79m"] Dec 06 07:43:29 crc kubenswrapper[4823]: E1206 07:43:29.435409 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="extract-utilities" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.435434 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="extract-utilities" Dec 06 07:43:29 crc kubenswrapper[4823]: E1206 07:43:29.435466 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="extract-content" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.435477 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="extract-content" Dec 06 07:43:29 crc kubenswrapper[4823]: E1206 07:43:29.435534 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="registry-server" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.435548 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="registry-server" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.435970 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e23ba9-45c7-4ce3-bb76-cbf4b4414f9d" containerName="registry-server" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.438829 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.446521 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8c79m"] Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.531247 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-utilities\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.531311 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-catalog-content\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.531338 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5bbz\" (UniqueName: \"kubernetes.io/projected/d7e12207-5f52-4cb9-a26b-46e64cc548f3-kube-api-access-h5bbz\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.633788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-utilities\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.634176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-catalog-content\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.634220 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5bbz\" (UniqueName: \"kubernetes.io/projected/d7e12207-5f52-4cb9-a26b-46e64cc548f3-kube-api-access-h5bbz\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.634874 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-utilities\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.634887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-catalog-content\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.657723 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h5bbz\" (UniqueName: \"kubernetes.io/projected/d7e12207-5f52-4cb9-a26b-46e64cc548f3-kube-api-access-h5bbz\") pod \"redhat-operators-8c79m\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:29 crc kubenswrapper[4823]: I1206 07:43:29.784192 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:30 crc kubenswrapper[4823]: I1206 07:43:30.319499 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8c79m"] Dec 06 07:43:30 crc kubenswrapper[4823]: I1206 07:43:30.722389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerStarted","Data":"f45636054a54acf585c1da0f6610c90de08faa0651269a7b47c3b8d9a51dfa3e"} Dec 06 07:43:31 crc kubenswrapper[4823]: I1206 07:43:31.738379 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerID="24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1" exitCode=0 Dec 06 07:43:31 crc kubenswrapper[4823]: I1206 07:43:31.738509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerDied","Data":"24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1"} Dec 06 07:43:33 crc kubenswrapper[4823]: I1206 07:43:33.770030 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerStarted","Data":"6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7"} Dec 06 07:43:35 crc kubenswrapper[4823]: I1206 07:43:35.791563 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerID="6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7" exitCode=0 Dec 06 07:43:35 crc kubenswrapper[4823]: I1206 07:43:35.791628 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerDied","Data":"6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7"} Dec 06 07:43:37 crc kubenswrapper[4823]: I1206 07:43:37.812451 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerStarted","Data":"eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663"} Dec 06 07:43:37 crc kubenswrapper[4823]: I1206 07:43:37.833741 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8c79m" podStartSLOduration=4.407947324 podStartE2EDuration="8.833722049s" podCreationTimestamp="2025-12-06 07:43:29 +0000 UTC" firstStartedPulling="2025-12-06 07:43:31.740465768 +0000 UTC m=+4713.026217728" lastFinishedPulling="2025-12-06 07:43:36.166240493 +0000 UTC m=+4717.451992453" observedRunningTime="2025-12-06 07:43:37.830829045 +0000 UTC m=+4719.116581015" watchObservedRunningTime="2025-12-06 07:43:37.833722049 +0000 UTC m=+4719.119474009" Dec 06 07:43:39 crc kubenswrapper[4823]: I1206 07:43:39.785413 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8c79m" 
Dec 06 07:43:39 crc kubenswrapper[4823]: I1206 07:43:39.786481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:40 crc kubenswrapper[4823]: I1206 07:43:40.840245 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8c79m" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="registry-server" probeResult="failure" output=< Dec 06 07:43:40 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 07:43:40 crc kubenswrapper[4823]: > Dec 06 07:43:49 crc kubenswrapper[4823]: I1206 07:43:49.832042 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:49 crc kubenswrapper[4823]: I1206 07:43:49.884967 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:50 crc kubenswrapper[4823]: I1206 07:43:50.076370 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8c79m"] Dec 06 07:43:50 crc kubenswrapper[4823]: I1206 07:43:50.932606 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8c79m" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="registry-server" containerID="cri-o://eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663" gracePeriod=2 Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.395783 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.471088 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-catalog-content\") pod \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.471288 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5bbz\" (UniqueName: \"kubernetes.io/projected/d7e12207-5f52-4cb9-a26b-46e64cc548f3-kube-api-access-h5bbz\") pod \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.472565 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-utilities\") pod \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\" (UID: \"d7e12207-5f52-4cb9-a26b-46e64cc548f3\") " Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.473374 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-utilities" (OuterVolumeSpecName: "utilities") pod "d7e12207-5f52-4cb9-a26b-46e64cc548f3" (UID: "d7e12207-5f52-4cb9-a26b-46e64cc548f3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.474547 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.477380 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e12207-5f52-4cb9-a26b-46e64cc548f3-kube-api-access-h5bbz" (OuterVolumeSpecName: "kube-api-access-h5bbz") pod "d7e12207-5f52-4cb9-a26b-46e64cc548f3" (UID: "d7e12207-5f52-4cb9-a26b-46e64cc548f3"). InnerVolumeSpecName "kube-api-access-h5bbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.576559 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5bbz\" (UniqueName: \"kubernetes.io/projected/d7e12207-5f52-4cb9-a26b-46e64cc548f3-kube-api-access-h5bbz\") on node \"crc\" DevicePath \"\"" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.580913 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7e12207-5f52-4cb9-a26b-46e64cc548f3" (UID: "d7e12207-5f52-4cb9-a26b-46e64cc548f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.678764 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e12207-5f52-4cb9-a26b-46e64cc548f3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.944911 4823 generic.go:334] "Generic (PLEG): container finished" podID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerID="eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663" exitCode=0 Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.944963 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerDied","Data":"eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663"} Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.944991 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8c79m" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.945005 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c79m" event={"ID":"d7e12207-5f52-4cb9-a26b-46e64cc548f3","Type":"ContainerDied","Data":"f45636054a54acf585c1da0f6610c90de08faa0651269a7b47c3b8d9a51dfa3e"} Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.945028 4823 scope.go:117] "RemoveContainer" containerID="eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.972355 4823 scope.go:117] "RemoveContainer" containerID="6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7" Dec 06 07:43:51 crc kubenswrapper[4823]: I1206 07:43:51.992149 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8c79m"] Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.004427 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8c79m"] Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.007830 4823 scope.go:117] "RemoveContainer" containerID="24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1" Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.060379 4823 scope.go:117] "RemoveContainer" containerID="eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663" Dec 06 07:43:52 crc kubenswrapper[4823]: E1206 07:43:52.060932 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663\": container with ID starting with eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663 not found: ID does not exist" containerID="eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663" Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.060989 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663"} err="failed to get container status \"eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663\": rpc error: code = NotFound desc = could not find container \"eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663\": container with ID starting with eb96d505ae90a24a59f7351b4a5b3aa6152b3406e5dc26ec33d6c0087de85663 not found: ID does not exist" Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.061035 4823 scope.go:117] "RemoveContainer" containerID="6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7" Dec 06 07:43:52 crc kubenswrapper[4823]: E1206 07:43:52.061423 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7\": container with ID starting with 6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7 not found: ID does not exist" containerID="6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7" Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.061455 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7"} err="failed to get container status \"6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7\": rpc error: code = NotFound desc = could not find container 
\"6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7\": container with ID starting with 6e03340eebd21acab7e4fd3672697a99ae64a44282ab51a95878e61ecf744db7 not found: ID does not exist" Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.061479 4823 scope.go:117] "RemoveContainer" containerID="24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1" Dec 06 07:43:52 crc kubenswrapper[4823]: E1206 07:43:52.061888 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1\": container with ID starting with 24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1 not found: ID does not exist" containerID="24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1" Dec 06 07:43:52 crc kubenswrapper[4823]: I1206 07:43:52.061914 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1"} err="failed to get container status \"24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1\": rpc error: code = NotFound desc = could not find container \"24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1\": container with ID starting with 24274b06c1f3627a7ae9f238bb1f7f77cf4974100b1516cda8b7e4943677c1d1 not found: ID does not exist" Dec 06 07:43:53 crc kubenswrapper[4823]: I1206 07:43:53.174257 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" path="/var/lib/kubelet/pods/d7e12207-5f52-4cb9-a26b-46e64cc548f3/volumes" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.159501 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz"] Dec 06 07:45:00 crc kubenswrapper[4823]: E1206 07:45:00.160625 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="extract-content" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.160644 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="extract-content" Dec 06 07:45:00 crc kubenswrapper[4823]: E1206 07:45:00.160744 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="registry-server" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.160757 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="registry-server" Dec 06 07:45:00 crc kubenswrapper[4823]: E1206 07:45:00.160780 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="extract-utilities" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.160790 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="extract-utilities" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.161042 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e12207-5f52-4cb9-a26b-46e64cc548f3" containerName="registry-server" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.162418 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.165218 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.165866 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.169862 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz"] Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.315980 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec582c83-c73c-43b7-853e-0bea57adffdc-config-volume\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.316031 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec582c83-c73c-43b7-853e-0bea57adffdc-secret-volume\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.316099 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lht6\" (UniqueName: \"kubernetes.io/projected/ec582c83-c73c-43b7-853e-0bea57adffdc-kube-api-access-4lht6\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.418057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec582c83-c73c-43b7-853e-0bea57adffdc-config-volume\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.418128 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec582c83-c73c-43b7-853e-0bea57adffdc-secret-volume\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.418205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lht6\" (UniqueName: \"kubernetes.io/projected/ec582c83-c73c-43b7-853e-0bea57adffdc-kube-api-access-4lht6\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.419038 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec582c83-c73c-43b7-853e-0bea57adffdc-config-volume\") pod 
\"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.431284 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec582c83-c73c-43b7-853e-0bea57adffdc-secret-volume\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.435063 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lht6\" (UniqueName: \"kubernetes.io/projected/ec582c83-c73c-43b7-853e-0bea57adffdc-kube-api-access-4lht6\") pod \"collect-profiles-29416785-mm6zz\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.487438 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:00 crc kubenswrapper[4823]: I1206 07:45:00.936034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz"] Dec 06 07:45:01 crc kubenswrapper[4823]: I1206 07:45:01.618238 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" event={"ID":"ec582c83-c73c-43b7-853e-0bea57adffdc","Type":"ContainerStarted","Data":"331d3d9e87e6c190347da4944466ab3f49da31c8c8ba8d4d967c8fcf74e156ce"} Dec 06 07:45:01 crc kubenswrapper[4823]: I1206 07:45:01.618845 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" event={"ID":"ec582c83-c73c-43b7-853e-0bea57adffdc","Type":"ContainerStarted","Data":"503df4505ca2202ec3fd69e78f7af14c1b27b4027a2b0326d65ed53e6d1d76dc"} Dec 06 07:45:01 crc kubenswrapper[4823]: I1206 07:45:01.647609 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" podStartSLOduration=1.647592177 podStartE2EDuration="1.647592177s" podCreationTimestamp="2025-12-06 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:45:01.642280924 +0000 UTC m=+4802.928032894" watchObservedRunningTime="2025-12-06 07:45:01.647592177 +0000 UTC m=+4802.933344127" Dec 06 07:45:02 crc kubenswrapper[4823]: I1206 07:45:02.629925 4823 generic.go:334] "Generic (PLEG): container finished" podID="ec582c83-c73c-43b7-853e-0bea57adffdc" containerID="331d3d9e87e6c190347da4944466ab3f49da31c8c8ba8d4d967c8fcf74e156ce" exitCode=0 Dec 06 07:45:02 crc kubenswrapper[4823]: I1206 07:45:02.629975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" event={"ID":"ec582c83-c73c-43b7-853e-0bea57adffdc","Type":"ContainerDied","Data":"331d3d9e87e6c190347da4944466ab3f49da31c8c8ba8d4d967c8fcf74e156ce"} Dec 06 07:45:03 crc kubenswrapper[4823]: I1206 07:45:03.991364 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.094798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec582c83-c73c-43b7-853e-0bea57adffdc-secret-volume\") pod \"ec582c83-c73c-43b7-853e-0bea57adffdc\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.094848 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec582c83-c73c-43b7-853e-0bea57adffdc-config-volume\") pod \"ec582c83-c73c-43b7-853e-0bea57adffdc\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.094970 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lht6\" (UniqueName: \"kubernetes.io/projected/ec582c83-c73c-43b7-853e-0bea57adffdc-kube-api-access-4lht6\") pod \"ec582c83-c73c-43b7-853e-0bea57adffdc\" (UID: \"ec582c83-c73c-43b7-853e-0bea57adffdc\") " Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.095677 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec582c83-c73c-43b7-853e-0bea57adffdc-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec582c83-c73c-43b7-853e-0bea57adffdc" (UID: "ec582c83-c73c-43b7-853e-0bea57adffdc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.101229 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec582c83-c73c-43b7-853e-0bea57adffdc-kube-api-access-4lht6" (OuterVolumeSpecName: "kube-api-access-4lht6") pod "ec582c83-c73c-43b7-853e-0bea57adffdc" (UID: "ec582c83-c73c-43b7-853e-0bea57adffdc"). InnerVolumeSpecName "kube-api-access-4lht6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.102926 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec582c83-c73c-43b7-853e-0bea57adffdc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec582c83-c73c-43b7-853e-0bea57adffdc" (UID: "ec582c83-c73c-43b7-853e-0bea57adffdc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.197180 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec582c83-c73c-43b7-853e-0bea57adffdc-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.197222 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec582c83-c73c-43b7-853e-0bea57adffdc-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.197233 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lht6\" (UniqueName: \"kubernetes.io/projected/ec582c83-c73c-43b7-853e-0bea57adffdc-kube-api-access-4lht6\") on node \"crc\" DevicePath \"\"" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.651392 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" event={"ID":"ec582c83-c73c-43b7-853e-0bea57adffdc","Type":"ContainerDied","Data":"503df4505ca2202ec3fd69e78f7af14c1b27b4027a2b0326d65ed53e6d1d76dc"} Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.651710 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503df4505ca2202ec3fd69e78f7af14c1b27b4027a2b0326d65ed53e6d1d76dc" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.651450 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416785-mm6zz" Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.712638 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69"] Dec 06 07:45:04 crc kubenswrapper[4823]: I1206 07:45:04.720853 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-jwp69"] Dec 06 07:45:05 crc kubenswrapper[4823]: I1206 07:45:05.159382 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09750040-6c82-466f-8cea-f040fb6ffb34" path="/var/lib/kubelet/pods/09750040-6c82-466f-8cea-f040fb6ffb34/volumes" Dec 06 07:45:06 crc kubenswrapper[4823]: I1206 07:45:06.051530 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:45:06 crc kubenswrapper[4823]: I1206 07:45:06.051609 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:45:36 crc kubenswrapper[4823]: I1206 07:45:36.052249 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:45:36 crc kubenswrapper[4823]: I1206 07:45:36.052765 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:45:45 crc kubenswrapper[4823]: I1206 07:45:45.347775 4823 scope.go:117] "RemoveContainer" containerID="a9bee78d997d8f83f0f433a932774ae808471261c0a84537a4b6fac400518a3d" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.141197 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4qw5"] Dec 06 07:46:04 crc kubenswrapper[4823]: E1206 07:46:04.142238 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec582c83-c73c-43b7-853e-0bea57adffdc" containerName="collect-profiles" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.142260 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec582c83-c73c-43b7-853e-0bea57adffdc" containerName="collect-profiles" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.142597 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec582c83-c73c-43b7-853e-0bea57adffdc" containerName="collect-profiles" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.144370 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.167985 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4qw5"] Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.212422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-catalog-content\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.212546 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-utilities\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.212725 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj78\" (UniqueName: \"kubernetes.io/projected/3661a8ee-1598-4d63-a779-72fd6f56d7fa-kube-api-access-vrj78\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.314616 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj78\" (UniqueName: \"kubernetes.io/projected/3661a8ee-1598-4d63-a779-72fd6f56d7fa-kube-api-access-vrj78\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.314956 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-catalog-content\") pod \"certified-operators-r4qw5\" (UID: 
\"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.315062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-utilities\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.315803 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-catalog-content\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.315875 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-utilities\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.515894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj78\" (UniqueName: \"kubernetes.io/projected/3661a8ee-1598-4d63-a779-72fd6f56d7fa-kube-api-access-vrj78\") pod \"certified-operators-r4qw5\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:04 crc kubenswrapper[4823]: I1206 07:46:04.768286 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:05 crc kubenswrapper[4823]: I1206 07:46:05.285165 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4qw5"] Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.051859 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.052189 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.052240 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.053064 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d406098cecd6a771940967f9978caf5dafb2641fab3e2c540d0dc94f6bb77259"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.053118 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://d406098cecd6a771940967f9978caf5dafb2641fab3e2c540d0dc94f6bb77259" gracePeriod=600 Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.237936 4823 generic.go:334] "Generic (PLEG): container finished" podID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerID="4c6ebfe5ca2bcb7f36589d329f8c4091a9da2b33bbe974f1a38b75fd9f571cc2" exitCode=0 Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.237987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerDied","Data":"4c6ebfe5ca2bcb7f36589d329f8c4091a9da2b33bbe974f1a38b75fd9f571cc2"} Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.238019 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerStarted","Data":"177a452339dfc66d82fcc70f525e62807fbc7d7ed6f371c2817f31ccda001a9f"} Dec 06 07:46:06 crc kubenswrapper[4823]: I1206 07:46:06.629456 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:46:07 crc kubenswrapper[4823]: I1206 07:46:07.256702 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="d406098cecd6a771940967f9978caf5dafb2641fab3e2c540d0dc94f6bb77259" exitCode=0 Dec 06 07:46:07 crc kubenswrapper[4823]: I1206 07:46:07.256813 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"d406098cecd6a771940967f9978caf5dafb2641fab3e2c540d0dc94f6bb77259"} Dec 06 07:46:07 crc kubenswrapper[4823]: I1206 07:46:07.257485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"} Dec 06 07:46:07 crc kubenswrapper[4823]: I1206 07:46:07.257522 4823 scope.go:117] "RemoveContainer" containerID="3506dfe82fc2bec91d6c85591c05f423a7f7814584b6935dff27eff91a39c72d" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.269318 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerStarted","Data":"0d5d2dd24167e46ca29e2c5742c5a60d570880a582343b28d818323addb6fe17"} Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.728125 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9xns9"] Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.733225 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.741340 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xns9"] Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.839257 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8nlt\" (UniqueName: \"kubernetes.io/projected/9714a6d8-41fb-464f-85ec-c4ba3ae86033-kube-api-access-z8nlt\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.839771 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-catalog-content\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.840009 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-utilities\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.942710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8nlt\" (UniqueName: \"kubernetes.io/projected/9714a6d8-41fb-464f-85ec-c4ba3ae86033-kube-api-access-z8nlt\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.942895 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-catalog-content\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.942982 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-utilities\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.943421 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-catalog-content\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.943533 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-utilities\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:08 crc kubenswrapper[4823]: I1206 07:46:08.976535 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z8nlt\" (UniqueName: \"kubernetes.io/projected/9714a6d8-41fb-464f-85ec-c4ba3ae86033-kube-api-access-z8nlt\") pod \"redhat-marketplace-9xns9\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:09 crc kubenswrapper[4823]: I1206 07:46:09.056657 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:09 crc kubenswrapper[4823]: I1206 07:46:09.288824 4823 generic.go:334] "Generic (PLEG): container finished" podID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerID="0d5d2dd24167e46ca29e2c5742c5a60d570880a582343b28d818323addb6fe17" exitCode=0 Dec 06 07:46:09 crc kubenswrapper[4823]: I1206 07:46:09.289111 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerDied","Data":"0d5d2dd24167e46ca29e2c5742c5a60d570880a582343b28d818323addb6fe17"} Dec 06 07:46:09 crc kubenswrapper[4823]: W1206 07:46:09.588507 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9714a6d8_41fb_464f_85ec_c4ba3ae86033.slice/crio-6ec76f0ed4689a7a53c77516bb62e051412e082d0c94f84f08bd8f536f65dd97 WatchSource:0}: Error finding container 6ec76f0ed4689a7a53c77516bb62e051412e082d0c94f84f08bd8f536f65dd97: Status 404 returned error can't find the container with id 6ec76f0ed4689a7a53c77516bb62e051412e082d0c94f84f08bd8f536f65dd97 Dec 06 07:46:09 crc kubenswrapper[4823]: I1206 07:46:09.600593 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xns9"] Dec 06 07:46:10 crc kubenswrapper[4823]: I1206 07:46:10.300615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerStarted","Data":"913c4d016c23b0aae3e3937609e300b1a67dd20c3de2153eea1bbc7955a48573"} Dec 06 07:46:10 crc kubenswrapper[4823]: I1206 07:46:10.304764 4823 generic.go:334] "Generic (PLEG): container finished" podID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerID="16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c" exitCode=0 Dec 06 07:46:10 crc kubenswrapper[4823]: I1206 07:46:10.304802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerDied","Data":"16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c"} Dec 06 07:46:10 crc kubenswrapper[4823]: I1206 07:46:10.304824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerStarted","Data":"6ec76f0ed4689a7a53c77516bb62e051412e082d0c94f84f08bd8f536f65dd97"} Dec 06 07:46:10 crc kubenswrapper[4823]: I1206 07:46:10.322493 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4qw5" podStartSLOduration=3.09073086 podStartE2EDuration="6.322449342s" podCreationTimestamp="2025-12-06 07:46:04 +0000 UTC" firstStartedPulling="2025-12-06 07:46:06.628938973 +0000 UTC m=+4867.914690923" lastFinishedPulling="2025-12-06 07:46:09.860657445 +0000 UTC m=+4871.146409405" observedRunningTime="2025-12-06 07:46:10.316091609 +0000 UTC m=+4871.601843569" 
watchObservedRunningTime="2025-12-06 07:46:10.322449342 +0000 UTC m=+4871.608201302" Dec 06 07:46:11 crc kubenswrapper[4823]: I1206 07:46:11.319260 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerStarted","Data":"e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2"} Dec 06 07:46:12 crc kubenswrapper[4823]: I1206 07:46:12.331065 4823 generic.go:334] "Generic (PLEG): container finished" podID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerID="e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2" exitCode=0 Dec 06 07:46:12 crc kubenswrapper[4823]: I1206 07:46:12.331109 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerDied","Data":"e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2"} Dec 06 07:46:14 crc kubenswrapper[4823]: I1206 07:46:14.355584 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerStarted","Data":"9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5"} Dec 06 07:46:14 crc kubenswrapper[4823]: I1206 07:46:14.376477 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9xns9" podStartSLOduration=2.754190613 podStartE2EDuration="6.376458498s" podCreationTimestamp="2025-12-06 07:46:08 +0000 UTC" firstStartedPulling="2025-12-06 07:46:10.306439541 +0000 UTC m=+4871.592191501" lastFinishedPulling="2025-12-06 07:46:13.928707426 +0000 UTC m=+4875.214459386" observedRunningTime="2025-12-06 07:46:14.376126568 +0000 UTC m=+4875.661878528" watchObservedRunningTime="2025-12-06 07:46:14.376458498 +0000 UTC m=+4875.662210468" Dec 06 07:46:14 crc kubenswrapper[4823]: I1206 07:46:14.772001 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:14 crc kubenswrapper[4823]: I1206 07:46:14.772081 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:14 crc kubenswrapper[4823]: I1206 07:46:14.835517 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:15 crc kubenswrapper[4823]: I1206 07:46:15.691089 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:16 crc kubenswrapper[4823]: I1206 07:46:16.719271 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4qw5"] Dec 06 07:46:17 crc kubenswrapper[4823]: I1206 07:46:17.384439 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4qw5" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="registry-server" containerID="cri-o://913c4d016c23b0aae3e3937609e300b1a67dd20c3de2153eea1bbc7955a48573" gracePeriod=2 Dec 06 07:46:19 crc kubenswrapper[4823]: I1206 07:46:19.056905 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:19 crc kubenswrapper[4823]: I1206 07:46:19.057987 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:19 crc kubenswrapper[4823]: I1206 07:46:19.111434 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:19 crc kubenswrapper[4823]: I1206 07:46:19.405363 4823 generic.go:334] "Generic (PLEG): container finished" podID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerID="913c4d016c23b0aae3e3937609e300b1a67dd20c3de2153eea1bbc7955a48573" exitCode=0 Dec 06 07:46:19 crc kubenswrapper[4823]: I1206 07:46:19.405473 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerDied","Data":"913c4d016c23b0aae3e3937609e300b1a67dd20c3de2153eea1bbc7955a48573"} Dec 06 07:46:19 crc kubenswrapper[4823]: I1206 07:46:19.449754 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.119726 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xns9"] Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.594703 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.713598 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj78\" (UniqueName: \"kubernetes.io/projected/3661a8ee-1598-4d63-a779-72fd6f56d7fa-kube-api-access-vrj78\") pod \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.713731 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-utilities\") pod \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.714016 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-catalog-content\") pod \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\" (UID: \"3661a8ee-1598-4d63-a779-72fd6f56d7fa\") " Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.718227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-utilities" (OuterVolumeSpecName: "utilities") pod "3661a8ee-1598-4d63-a779-72fd6f56d7fa" (UID: "3661a8ee-1598-4d63-a779-72fd6f56d7fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.728979 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3661a8ee-1598-4d63-a779-72fd6f56d7fa-kube-api-access-vrj78" (OuterVolumeSpecName: "kube-api-access-vrj78") pod "3661a8ee-1598-4d63-a779-72fd6f56d7fa" (UID: "3661a8ee-1598-4d63-a779-72fd6f56d7fa"). InnerVolumeSpecName "kube-api-access-vrj78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.794885 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3661a8ee-1598-4d63-a779-72fd6f56d7fa" (UID: "3661a8ee-1598-4d63-a779-72fd6f56d7fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.816629 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj78\" (UniqueName: \"kubernetes.io/projected/3661a8ee-1598-4d63-a779-72fd6f56d7fa-kube-api-access-vrj78\") on node \"crc\" DevicePath \"\"" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.816694 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:46:20 crc kubenswrapper[4823]: I1206 07:46:20.816706 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3661a8ee-1598-4d63-a779-72fd6f56d7fa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.427861 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4qw5" event={"ID":"3661a8ee-1598-4d63-a779-72fd6f56d7fa","Type":"ContainerDied","Data":"177a452339dfc66d82fcc70f525e62807fbc7d7ed6f371c2817f31ccda001a9f"} Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.428186 4823 scope.go:117] "RemoveContainer" containerID="913c4d016c23b0aae3e3937609e300b1a67dd20c3de2153eea1bbc7955a48573" Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.427887 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4qw5" Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.428027 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9xns9" podUID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerName="registry-server" containerID="cri-o://9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5" gracePeriod=2 Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.453623 4823 scope.go:117] "RemoveContainer" containerID="0d5d2dd24167e46ca29e2c5742c5a60d570880a582343b28d818323addb6fe17" Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.465410 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4qw5"] Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.477479 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4qw5"] Dec 06 07:46:21 crc kubenswrapper[4823]: I1206 07:46:21.488133 4823 scope.go:117] "RemoveContainer" containerID="4c6ebfe5ca2bcb7f36589d329f8c4091a9da2b33bbe974f1a38b75fd9f571cc2" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.156250 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.246342 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-catalog-content\") pod \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.246553 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8nlt\" (UniqueName: \"kubernetes.io/projected/9714a6d8-41fb-464f-85ec-c4ba3ae86033-kube-api-access-z8nlt\") pod \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.246848 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-utilities\") pod \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\" (UID: \"9714a6d8-41fb-464f-85ec-c4ba3ae86033\") " Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.248101 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-utilities" (OuterVolumeSpecName: "utilities") pod "9714a6d8-41fb-464f-85ec-c4ba3ae86033" (UID: "9714a6d8-41fb-464f-85ec-c4ba3ae86033"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.254248 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9714a6d8-41fb-464f-85ec-c4ba3ae86033-kube-api-access-z8nlt" (OuterVolumeSpecName: "kube-api-access-z8nlt") pod "9714a6d8-41fb-464f-85ec-c4ba3ae86033" (UID: "9714a6d8-41fb-464f-85ec-c4ba3ae86033"). InnerVolumeSpecName "kube-api-access-z8nlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.255032 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.271695 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9714a6d8-41fb-464f-85ec-c4ba3ae86033" (UID: "9714a6d8-41fb-464f-85ec-c4ba3ae86033"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.356565 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9714a6d8-41fb-464f-85ec-c4ba3ae86033-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.356607 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8nlt\" (UniqueName: \"kubernetes.io/projected/9714a6d8-41fb-464f-85ec-c4ba3ae86033-kube-api-access-z8nlt\") on node \"crc\" DevicePath \"\"" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.442057 4823 generic.go:334] "Generic (PLEG): container finished" podID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerID="9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5" exitCode=0 Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.442142 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerDied","Data":"9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5"} Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.442182 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xns9" event={"ID":"9714a6d8-41fb-464f-85ec-c4ba3ae86033","Type":"ContainerDied","Data":"6ec76f0ed4689a7a53c77516bb62e051412e082d0c94f84f08bd8f536f65dd97"} Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.442211 4823 scope.go:117] "RemoveContainer" containerID="9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.442284 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xns9" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.471152 4823 scope.go:117] "RemoveContainer" containerID="e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.492106 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xns9"] Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.504945 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xns9"] Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.520355 4823 scope.go:117] "RemoveContainer" containerID="16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.544884 4823 scope.go:117] "RemoveContainer" containerID="9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5" Dec 06 07:46:22 crc kubenswrapper[4823]: E1206 07:46:22.545759 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5\": container with ID starting with 9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5 not found: ID does not exist" containerID="9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.545793 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5"} err="failed to get container status \"9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5\": rpc error: code = NotFound desc = could not find container \"9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5\": container with ID starting with 9fe04f868dbe5fc02d588d36c7e0f5e1475028fa8de2d1f39ac5837004e183a5 not found: ID does not exist" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.545814 4823 scope.go:117] "RemoveContainer" containerID="e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2" Dec 06 07:46:22 crc kubenswrapper[4823]: E1206 07:46:22.546210 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2\": container with ID starting with e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2 not found: ID does not exist" containerID="e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.546245 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2"} err="failed to get container status \"e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2\": rpc error: code = NotFound desc = could not find container \"e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2\": container with ID starting with e48e213bdf1c131b0739bb861ca198d9361939c6dc40ea8db7be105f730530d2 not found: ID does not exist" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.546266 4823 scope.go:117] "RemoveContainer" containerID="16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c" Dec 06 07:46:22 crc kubenswrapper[4823]: E1206 07:46:22.546595 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c\": container with ID starting with 16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c not found: ID does not exist" containerID="16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c" Dec 06 07:46:22 crc kubenswrapper[4823]: I1206 07:46:22.546641 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c"} err="failed to get container status \"16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c\": rpc error: code = NotFound desc = could not find container \"16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c\": container with ID starting with 16d8971af53dd63d5c84571c1e6f7bfc9025fd02fa29b31da8518babd9dc077c not found: ID does not exist" Dec 06 07:46:23 crc kubenswrapper[4823]: I1206 07:46:23.155639 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" path="/var/lib/kubelet/pods/3661a8ee-1598-4d63-a779-72fd6f56d7fa/volumes" Dec 06 07:46:23 crc kubenswrapper[4823]: I1206 07:46:23.157109 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" path="/var/lib/kubelet/pods/9714a6d8-41fb-464f-85ec-c4ba3ae86033/volumes" Dec 06 07:48:06 crc kubenswrapper[4823]: I1206 07:48:06.052148 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:48:06 crc kubenswrapper[4823]: I1206 07:48:06.053950 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:48:36 crc kubenswrapper[4823]: I1206 07:48:36.052876 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:48:36 crc kubenswrapper[4823]: I1206 07:48:36.053421 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.052206 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.052773 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.052828 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.053709 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.053755 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" gracePeriod=600 Dec 06 07:49:06 crc kubenswrapper[4823]: E1206 07:49:06.175572 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.580226 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" exitCode=0 Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.580284 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"} Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.580329 4823 scope.go:117] "RemoveContainer" containerID="d406098cecd6a771940967f9978caf5dafb2641fab3e2c540d0dc94f6bb77259" Dec 06 07:49:06 crc kubenswrapper[4823]: I1206 07:49:06.581489 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:49:06 crc kubenswrapper[4823]: E1206 07:49:06.581952 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:49:17 crc kubenswrapper[4823]: I1206 07:49:17.141599 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:49:17 crc kubenswrapper[4823]: E1206 07:49:17.142422 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Dec 06 07:49:30 crc kubenswrapper[4823]: I1206 07:49:30.140577 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:49:30 crc kubenswrapper[4823]: E1206 07:49:30.141524 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:49:45 crc kubenswrapper[4823]: I1206 07:49:45.141989 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:49:45 crc kubenswrapper[4823]: E1206 07:49:45.142866 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:50:00 crc kubenswrapper[4823]: I1206 07:50:00.141231 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:50:00 crc kubenswrapper[4823]: E1206 07:50:00.142235 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:50:13 crc kubenswrapper[4823]: I1206 07:50:13.141291 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:50:13 crc kubenswrapper[4823]: E1206 07:50:13.142056 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:50:28 crc kubenswrapper[4823]: I1206 07:50:28.141088 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:50:28 crc kubenswrapper[4823]: E1206 07:50:28.142102 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:50:42 crc kubenswrapper[4823]: I1206 07:50:42.142772 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:50:42 crc kubenswrapper[4823]: E1206 07:50:42.143632 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:50:53 crc kubenswrapper[4823]: I1206 07:50:53.141636 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:50:53 crc kubenswrapper[4823]: E1206 07:50:53.142412 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:51:05 crc kubenswrapper[4823]: I1206 07:51:05.141646 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:51:05 crc kubenswrapper[4823]: E1206 07:51:05.142430 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:51:18 crc kubenswrapper[4823]: I1206 07:51:18.140563 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:51:18 crc kubenswrapper[4823]: E1206 07:51:18.141386 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:51:30 crc kubenswrapper[4823]: I1206 07:51:30.150133 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:51:30 crc kubenswrapper[4823]: E1206 07:51:30.152532 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:51:41 crc kubenswrapper[4823]: I1206 07:51:41.141217 4823 
Dec 06 07:51:41 crc kubenswrapper[4823]: I1206 07:51:41.141217 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:51:41 crc kubenswrapper[4823]: E1206 07:51:41.142119 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:51:56 crc kubenswrapper[4823]: I1206 07:51:56.141212 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:51:56 crc kubenswrapper[4823]: E1206 07:51:56.142249 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:52:11 crc kubenswrapper[4823]: I1206 07:52:11.140786 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:52:11 crc kubenswrapper[4823]: E1206 07:52:11.141619 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:52:25 crc kubenswrapper[4823]: I1206 07:52:25.140907 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:52:25 crc kubenswrapper[4823]: E1206 07:52:25.141591 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:52:39 crc kubenswrapper[4823]: I1206 07:52:39.149866 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:52:39 crc kubenswrapper[4823]: E1206 07:52:39.150758 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:52:54 crc kubenswrapper[4823]: I1206 07:52:54.141180 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:52:54 crc kubenswrapper[4823]: E1206 07:52:54.142015 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:53:09 crc kubenswrapper[4823]: I1206 07:53:09.153603 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:53:09 crc kubenswrapper[4823]: E1206 07:53:09.154755 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:53:22 crc kubenswrapper[4823]: I1206 07:53:22.144348 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:53:22 crc kubenswrapper[4823]: E1206 07:53:22.145113 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 07:53:37 crc kubenswrapper[4823]: I1206 07:53:37.142279 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:53:37 crc kubenswrapper[4823]: E1206 07:53:37.143198 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="extract-utilities" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.065152 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="extract-utilities" Dec 06 07:53:47 crc kubenswrapper[4823]: E1206 07:53:47.065175 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerName="extract-content" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.065183 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerName="extract-content" Dec 06 07:53:47 crc kubenswrapper[4823]: E1206 07:53:47.065193 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="registry-server" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.065199 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="registry-server" Dec 06 07:53:47 crc kubenswrapper[4823]: E1206 07:53:47.065213 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="extract-content" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.065219 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="extract-content" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.065445 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9714a6d8-41fb-464f-85ec-c4ba3ae86033" containerName="registry-server" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.065461 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3661a8ee-1598-4d63-a779-72fd6f56d7fa" containerName="registry-server" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.067170 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.103768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rchcx"] Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.243743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-utilities\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.243957 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-catalog-content\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.243998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcmw\" (UniqueName: \"kubernetes.io/projected/ab85123c-6016-44e0-9b8f-0425f048e088-kube-api-access-llcmw\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.345868 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-utilities\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.346115 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-catalog-content\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.346166 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcmw\" (UniqueName: \"kubernetes.io/projected/ab85123c-6016-44e0-9b8f-0425f048e088-kube-api-access-llcmw\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.346302 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-utilities\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.346545 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-catalog-content\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.374820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-llcmw\" (UniqueName: \"kubernetes.io/projected/ab85123c-6016-44e0-9b8f-0425f048e088-kube-api-access-llcmw\") pod \"redhat-operators-rchcx\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") " pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.393731 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:53:47 crc kubenswrapper[4823]: I1206 07:53:47.937852 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rchcx"] Dec 06 07:53:48 crc kubenswrapper[4823]: I1206 07:53:48.392899 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab85123c-6016-44e0-9b8f-0425f048e088" containerID="8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb" exitCode=0 Dec 06 07:53:48 crc kubenswrapper[4823]: I1206 07:53:48.392941 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerDied","Data":"8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb"} Dec 06 07:53:48 crc kubenswrapper[4823]: I1206 07:53:48.392968 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerStarted","Data":"8380d5c8f6a85bed3f9812a1b731893d28e2ab90c3fa50eb209d07f92fa02f28"} Dec 06 07:53:48 crc kubenswrapper[4823]: I1206 07:53:48.396432 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:53:49 crc kubenswrapper[4823]: I1206 07:53:49.151617 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:53:49 crc kubenswrapper[4823]: E1206 07:53:49.152121 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:53:50 crc kubenswrapper[4823]: I1206 07:53:50.414362 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerStarted","Data":"8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a"} Dec 06 07:53:59 crc kubenswrapper[4823]: I1206 07:53:59.506807 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab85123c-6016-44e0-9b8f-0425f048e088" containerID="8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a" exitCode=0 Dec 06 07:53:59 crc kubenswrapper[4823]: I1206 07:53:59.506890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerDied","Data":"8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a"} Dec 06 07:54:00 crc kubenswrapper[4823]: I1206 07:54:00.141357 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:54:00 crc kubenswrapper[4823]: E1206 07:54:00.141741 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 07:54:01 crc kubenswrapper[4823]: I1206 07:54:01.533564 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerStarted","Data":"a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa"} Dec 06 07:54:01 crc kubenswrapper[4823]: I1206 07:54:01.568483 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rchcx" podStartSLOduration=1.7110436610000002 podStartE2EDuration="14.568459368s" podCreationTimestamp="2025-12-06 07:53:47 +0000 UTC" firstStartedPulling="2025-12-06 07:53:48.396167059 +0000 UTC m=+5329.681919019" lastFinishedPulling="2025-12-06 07:54:01.253582766 +0000 UTC m=+5342.539334726" observedRunningTime="2025-12-06 07:54:01.558190933 +0000 UTC m=+5342.843942893" watchObservedRunningTime="2025-12-06 07:54:01.568459368 +0000 UTC m=+5342.854211328" Dec 06 07:54:07 crc kubenswrapper[4823]: I1206 07:54:07.394496 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:54:07 crc kubenswrapper[4823]: I1206 07:54:07.396749 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:54:08 crc kubenswrapper[4823]: I1206 07:54:08.445645 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rchcx" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="registry-server" probeResult="failure" output=< Dec 06 07:54:08 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Dec 06 07:54:08 crc kubenswrapper[4823]: > Dec 06 07:54:15 crc kubenswrapper[4823]: I1206 07:54:15.141530 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8" Dec 06 07:54:15 crc kubenswrapper[4823]: I1206 07:54:15.662049 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107"} Dec 06 07:54:17 crc kubenswrapper[4823]: I1206 07:54:17.446217 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:54:17 crc kubenswrapper[4823]: I1206 07:54:17.501071 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:54:18 crc kubenswrapper[4823]: I1206 07:54:18.298604 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rchcx"] Dec 06 07:54:18 crc kubenswrapper[4823]: I1206 07:54:18.688099 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rchcx" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="registry-server" containerID="cri-o://a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa" gracePeriod=2 Dec 06 
Dec 06 07:54:15 crc kubenswrapper[4823]: I1206 07:54:15.141530 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:54:15 crc kubenswrapper[4823]: I1206 07:54:15.662049 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107"}
Dec 06 07:54:17 crc kubenswrapper[4823]: I1206 07:54:17.446217 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rchcx"
Dec 06 07:54:17 crc kubenswrapper[4823]: I1206 07:54:17.501071 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rchcx"
Dec 06 07:54:18 crc kubenswrapper[4823]: I1206 07:54:18.298604 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rchcx"]
Dec 06 07:54:18 crc kubenswrapper[4823]: I1206 07:54:18.688099 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rchcx" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="registry-server" containerID="cri-o://a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa" gracePeriod=2
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.214085 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rchcx"
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.386867 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llcmw\" (UniqueName: \"kubernetes.io/projected/ab85123c-6016-44e0-9b8f-0425f048e088-kube-api-access-llcmw\") pod \"ab85123c-6016-44e0-9b8f-0425f048e088\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") "
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.387076 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-utilities\") pod \"ab85123c-6016-44e0-9b8f-0425f048e088\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") "
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.387148 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-catalog-content\") pod \"ab85123c-6016-44e0-9b8f-0425f048e088\" (UID: \"ab85123c-6016-44e0-9b8f-0425f048e088\") "
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.388754 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-utilities" (OuterVolumeSpecName: "utilities") pod "ab85123c-6016-44e0-9b8f-0425f048e088" (UID: "ab85123c-6016-44e0-9b8f-0425f048e088"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.404705 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab85123c-6016-44e0-9b8f-0425f048e088-kube-api-access-llcmw" (OuterVolumeSpecName: "kube-api-access-llcmw") pod "ab85123c-6016-44e0-9b8f-0425f048e088" (UID: "ab85123c-6016-44e0-9b8f-0425f048e088"). InnerVolumeSpecName "kube-api-access-llcmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.489323 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llcmw\" (UniqueName: \"kubernetes.io/projected/ab85123c-6016-44e0-9b8f-0425f048e088-kube-api-access-llcmw\") on node \"crc\" DevicePath \"\""
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.489358 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.531113 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab85123c-6016-44e0-9b8f-0425f048e088" (UID: "ab85123c-6016-44e0-9b8f-0425f048e088"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.591173 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab85123c-6016-44e0-9b8f-0425f048e088-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.700092 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab85123c-6016-44e0-9b8f-0425f048e088" containerID="a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa" exitCode=0 Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.700141 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerDied","Data":"a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa"} Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.700170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rchcx" event={"ID":"ab85123c-6016-44e0-9b8f-0425f048e088","Type":"ContainerDied","Data":"8380d5c8f6a85bed3f9812a1b731893d28e2ab90c3fa50eb209d07f92fa02f28"} Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.700178 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rchcx" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.700189 4823 scope.go:117] "RemoveContainer" containerID="a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.727979 4823 scope.go:117] "RemoveContainer" containerID="8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.741226 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rchcx"] Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.750623 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rchcx"] Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.775624 4823 scope.go:117] "RemoveContainer" containerID="8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.805433 4823 scope.go:117] "RemoveContainer" containerID="a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa" Dec 06 07:54:19 crc kubenswrapper[4823]: E1206 07:54:19.806070 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa\": container with ID starting with a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa not found: ID does not exist" containerID="a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa" Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.806346 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa"} err="failed to get container status \"a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa\": rpc error: code = NotFound desc = could not find container \"a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa\": container with ID starting with a2bde41e6d841006dd0c088ce685623e24107401978b21dc281324ecc8e743aa not found: ID does not exist" Dec 06 07:54:19 crc 
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.806386 4823 scope.go:117] "RemoveContainer" containerID="8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a"
Dec 06 07:54:19 crc kubenswrapper[4823]: E1206 07:54:19.808111 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a\": container with ID starting with 8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a not found: ID does not exist" containerID="8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a"
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.808147 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a"} err="failed to get container status \"8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a\": rpc error: code = NotFound desc = could not find container \"8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a\": container with ID starting with 8d802de235e84be44a5d8a2034671a357c8510fb584bd531874455e3db12846a not found: ID does not exist"
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.808170 4823 scope.go:117] "RemoveContainer" containerID="8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb"
Dec 06 07:54:19 crc kubenswrapper[4823]: E1206 07:54:19.808399 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb\": container with ID starting with 8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb not found: ID does not exist" containerID="8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb"
Dec 06 07:54:19 crc kubenswrapper[4823]: I1206 07:54:19.808434 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb"} err="failed to get container status \"8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb\": rpc error: code = NotFound desc = could not find container \"8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb\": container with ID starting with 8b4f5d6f7c38a8f4df0b21a1dfdbe218d0314dd64a55b60a1317bf973debcfeb not found: ID does not exist"
Dec 06 07:54:21 crc kubenswrapper[4823]: I1206 07:54:21.151209 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" path="/var/lib/kubelet/pods/ab85123c-6016-44e0-9b8f-0425f048e088/volumes"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.072531 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z9ztm"]
Dec 06 07:55:49 crc kubenswrapper[4823]: E1206 07:55:49.073515 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="extract-utilities"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.073530 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="extract-utilities"
Dec 06 07:55:49 crc kubenswrapper[4823]: E1206 07:55:49.073544 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="registry-server"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.073550 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="registry-server"
Dec 06 07:55:49 crc kubenswrapper[4823]: E1206 07:55:49.073571 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="extract-content"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.073577 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="extract-content"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.073771 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab85123c-6016-44e0-9b8f-0425f048e088" containerName="registry-server"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.075320 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9ztm"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.093308 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9ztm"]
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.135714 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-utilities\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.135832 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-catalog-content\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.135853 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zsh\" (UniqueName: \"kubernetes.io/projected/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-kube-api-access-p8zsh\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.238381 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-utilities\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.238890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-catalog-content\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm"
Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.238986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zsh\" (UniqueName: \"kubernetes.io/projected/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-kube-api-access-p8zsh\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm"
kubenswrapper[4823]: I1206 07:55:49.239311 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-utilities\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.240293 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-catalog-content\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.263745 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zsh\" (UniqueName: \"kubernetes.io/projected/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-kube-api-access-p8zsh\") pod \"community-operators-z9ztm\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.405200 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:49 crc kubenswrapper[4823]: I1206 07:55:49.970703 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9ztm"] Dec 06 07:55:50 crc kubenswrapper[4823]: I1206 07:55:50.645438 4823 generic.go:334] "Generic (PLEG): container finished" podID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerID="13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1" exitCode=0 Dec 06 07:55:50 crc kubenswrapper[4823]: I1206 07:55:50.645549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9ztm" event={"ID":"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9","Type":"ContainerDied","Data":"13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1"} Dec 06 07:55:50 crc kubenswrapper[4823]: I1206 07:55:50.645847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9ztm" event={"ID":"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9","Type":"ContainerStarted","Data":"33e578d12d48a1bbf047233d5818ddce4f60cd4874a4f58dce9e360ab4b733dc"} Dec 06 07:55:52 crc kubenswrapper[4823]: I1206 07:55:52.667998 4823 generic.go:334] "Generic (PLEG): container finished" podID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerID="e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4" exitCode=0 Dec 06 07:55:52 crc kubenswrapper[4823]: I1206 07:55:52.668355 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9ztm" event={"ID":"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9","Type":"ContainerDied","Data":"e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4"} Dec 06 07:55:53 crc kubenswrapper[4823]: I1206 07:55:53.681051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9ztm" event={"ID":"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9","Type":"ContainerStarted","Data":"9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43"} Dec 06 07:55:53 crc kubenswrapper[4823]: I1206 07:55:53.700872 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z9ztm" podStartSLOduration=2.056839087 
podStartE2EDuration="4.700839959s" podCreationTimestamp="2025-12-06 07:55:49 +0000 UTC" firstStartedPulling="2025-12-06 07:55:50.6491425 +0000 UTC m=+5451.934894470" lastFinishedPulling="2025-12-06 07:55:53.293143382 +0000 UTC m=+5454.578895342" observedRunningTime="2025-12-06 07:55:53.699324545 +0000 UTC m=+5454.985076525" watchObservedRunningTime="2025-12-06 07:55:53.700839959 +0000 UTC m=+5454.986591919" Dec 06 07:55:59 crc kubenswrapper[4823]: I1206 07:55:59.406263 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:59 crc kubenswrapper[4823]: I1206 07:55:59.407076 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:59 crc kubenswrapper[4823]: I1206 07:55:59.459053 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:59 crc kubenswrapper[4823]: I1206 07:55:59.797084 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:55:59 crc kubenswrapper[4823]: I1206 07:55:59.857126 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9ztm"] Dec 06 07:56:01 crc kubenswrapper[4823]: I1206 07:56:01.779131 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z9ztm" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="registry-server" containerID="cri-o://9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43" gracePeriod=2 Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.455343 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.580592 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zsh\" (UniqueName: \"kubernetes.io/projected/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-kube-api-access-p8zsh\") pod \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.580804 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-catalog-content\") pod \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.580889 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-utilities\") pod \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\" (UID: \"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9\") " Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.581625 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-utilities" (OuterVolumeSpecName: "utilities") pod "9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" (UID: "9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.586906 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-kube-api-access-p8zsh" (OuterVolumeSpecName: "kube-api-access-p8zsh") pod "9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" (UID: "9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9"). InnerVolumeSpecName "kube-api-access-p8zsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.638640 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" (UID: "9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.682931 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8zsh\" (UniqueName: \"kubernetes.io/projected/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-kube-api-access-p8zsh\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.682960 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.682971 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.808015 4823 generic.go:334] "Generic (PLEG): container finished" podID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerID="9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43" exitCode=0 Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.808075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9ztm" event={"ID":"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9","Type":"ContainerDied","Data":"9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43"} Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.808125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9ztm" event={"ID":"9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9","Type":"ContainerDied","Data":"33e578d12d48a1bbf047233d5818ddce4f60cd4874a4f58dce9e360ab4b733dc"} Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.808151 4823 scope.go:117] "RemoveContainer" containerID="9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.808206 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9ztm" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.846650 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9ztm"] Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.856304 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z9ztm"] Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.863873 4823 scope.go:117] "RemoveContainer" containerID="e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.885242 4823 scope.go:117] "RemoveContainer" containerID="13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.952939 4823 scope.go:117] "RemoveContainer" containerID="9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43" Dec 06 07:56:02 crc kubenswrapper[4823]: E1206 07:56:02.953750 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43\": container with ID starting with 9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43 not found: ID does not exist" containerID="9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.953923 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43"} err="failed to get container status \"9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43\": rpc error: code = NotFound desc = could not find container \"9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43\": container with ID starting with 9d79479fd73e7350df46e560c057917560af70d82503f400e3ac97a0828fcf43 not found: ID does not exist" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.953955 4823 scope.go:117] "RemoveContainer" containerID="e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4" Dec 06 07:56:02 crc kubenswrapper[4823]: E1206 07:56:02.954987 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4\": container with ID starting with e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4 not found: ID does not exist" containerID="e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.955073 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4"} err="failed to get container status \"e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4\": rpc error: code = NotFound desc = could not find container \"e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4\": container with ID starting with e4dba459cd7799cfd78b4a41e9f020adea87ee347fc12091d7cb996045ece8f4 not found: ID does not exist" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.955123 4823 scope.go:117] "RemoveContainer" containerID="13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1" Dec 06 07:56:02 crc kubenswrapper[4823]: E1206 07:56:02.955552 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1\": container with ID starting with 13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1 not found: ID does not exist" containerID="13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1" Dec 06 07:56:02 crc kubenswrapper[4823]: I1206 07:56:02.955584 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1"} err="failed to get container status \"13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1\": rpc error: code = NotFound desc = could not find container \"13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1\": container with ID starting with 13c5832b58e7f9a396cab22c827b1da7d18169f0ef636e9cf0cdbc7fd641b5c1 not found: ID does not exist" Dec 06 07:56:03 crc kubenswrapper[4823]: I1206 07:56:03.154016 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" path="/var/lib/kubelet/pods/9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9/volumes" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.547455 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fp5ml"] Dec 06 07:56:11 crc kubenswrapper[4823]: E1206 07:56:11.548596 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="extract-utilities" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.548615 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="extract-utilities" Dec 06 07:56:11 crc kubenswrapper[4823]: E1206 07:56:11.548646 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="registry-server" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.548654 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="registry-server" Dec 06 07:56:11 crc kubenswrapper[4823]: E1206 07:56:11.548689 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="extract-content" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.548698 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="extract-content" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.548938 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f17eb3b-f6f8-48e5-b4cc-31ef7712c0a9" containerName="registry-server" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.552243 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.578269 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fp5ml"] Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.651019 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-utilities\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.651088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-catalog-content\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.651368 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gwv\" (UniqueName: \"kubernetes.io/projected/d3082cf1-07cd-4a5a-860f-517063f353b1-kube-api-access-d8gwv\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.754276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-utilities\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.754344 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-catalog-content\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.754403 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gwv\" (UniqueName: \"kubernetes.io/projected/d3082cf1-07cd-4a5a-860f-517063f353b1-kube-api-access-d8gwv\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.755342 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-catalog-content\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.755364 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-utilities\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.783612 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d8gwv\" (UniqueName: \"kubernetes.io/projected/d3082cf1-07cd-4a5a-860f-517063f353b1-kube-api-access-d8gwv\") pod \"certified-operators-fp5ml\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:11 crc kubenswrapper[4823]: I1206 07:56:11.889925 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:12 crc kubenswrapper[4823]: I1206 07:56:12.653894 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fp5ml"] Dec 06 07:56:12 crc kubenswrapper[4823]: I1206 07:56:12.909123 4823 generic.go:334] "Generic (PLEG): container finished" podID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerID="eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80" exitCode=0 Dec 06 07:56:12 crc kubenswrapper[4823]: I1206 07:56:12.909165 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerDied","Data":"eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80"} Dec 06 07:56:12 crc kubenswrapper[4823]: I1206 07:56:12.909191 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerStarted","Data":"d059001fcc0b92788198b445505f45345ba12704348f32e0a28a3c19c3365cb0"} Dec 06 07:56:13 crc kubenswrapper[4823]: I1206 07:56:13.919550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerStarted","Data":"2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab"} Dec 06 07:56:14 crc kubenswrapper[4823]: I1206 07:56:14.931174 4823 generic.go:334] "Generic (PLEG): container finished" podID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerID="2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab" exitCode=0 Dec 06 07:56:14 crc kubenswrapper[4823]: I1206 07:56:14.931265 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerDied","Data":"2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab"} Dec 06 07:56:15 crc kubenswrapper[4823]: I1206 07:56:15.943574 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerStarted","Data":"4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418"} Dec 06 07:56:15 crc kubenswrapper[4823]: I1206 07:56:15.968359 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fp5ml" podStartSLOduration=2.571197743 podStartE2EDuration="4.968340417s" podCreationTimestamp="2025-12-06 07:56:11 +0000 UTC" firstStartedPulling="2025-12-06 07:56:12.91150371 +0000 UTC m=+5474.197255670" lastFinishedPulling="2025-12-06 07:56:15.308646384 +0000 UTC m=+5476.594398344" observedRunningTime="2025-12-06 07:56:15.962992593 +0000 UTC m=+5477.248744553" watchObservedRunningTime="2025-12-06 07:56:15.968340417 +0000 UTC m=+5477.254092377" Dec 06 07:56:21 crc kubenswrapper[4823]: I1206 07:56:21.891048 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:21 crc kubenswrapper[4823]: I1206 07:56:21.891634 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:21 crc kubenswrapper[4823]: I1206 07:56:21.939549 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:22 crc kubenswrapper[4823]: I1206 07:56:22.033723 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:22 crc kubenswrapper[4823]: I1206 07:56:22.184038 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fp5ml"] Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.025877 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fp5ml" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="registry-server" containerID="cri-o://4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418" gracePeriod=2 Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.526209 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.656647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-catalog-content\") pod \"d3082cf1-07cd-4a5a-860f-517063f353b1\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.656890 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-utilities\") pod \"d3082cf1-07cd-4a5a-860f-517063f353b1\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.656983 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8gwv\" (UniqueName: \"kubernetes.io/projected/d3082cf1-07cd-4a5a-860f-517063f353b1-kube-api-access-d8gwv\") pod \"d3082cf1-07cd-4a5a-860f-517063f353b1\" (UID: \"d3082cf1-07cd-4a5a-860f-517063f353b1\") " Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.657622 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-utilities" (OuterVolumeSpecName: "utilities") pod "d3082cf1-07cd-4a5a-860f-517063f353b1" (UID: "d3082cf1-07cd-4a5a-860f-517063f353b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.666880 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3082cf1-07cd-4a5a-860f-517063f353b1-kube-api-access-d8gwv" (OuterVolumeSpecName: "kube-api-access-d8gwv") pod "d3082cf1-07cd-4a5a-860f-517063f353b1" (UID: "d3082cf1-07cd-4a5a-860f-517063f353b1"). InnerVolumeSpecName "kube-api-access-d8gwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.714441 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3082cf1-07cd-4a5a-860f-517063f353b1" (UID: "d3082cf1-07cd-4a5a-860f-517063f353b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.760085 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8gwv\" (UniqueName: \"kubernetes.io/projected/d3082cf1-07cd-4a5a-860f-517063f353b1-kube-api-access-d8gwv\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.760129 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:24 crc kubenswrapper[4823]: I1206 07:56:24.760142 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3082cf1-07cd-4a5a-860f-517063f353b1-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.037859 4823 generic.go:334] "Generic (PLEG): container finished" podID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerID="4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418" exitCode=0 Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.037916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerDied","Data":"4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418"} Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.037951 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp5ml" event={"ID":"d3082cf1-07cd-4a5a-860f-517063f353b1","Type":"ContainerDied","Data":"d059001fcc0b92788198b445505f45345ba12704348f32e0a28a3c19c3365cb0"} Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.037971 4823 scope.go:117] "RemoveContainer" containerID="4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.038136 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fp5ml" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.076972 4823 scope.go:117] "RemoveContainer" containerID="2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.099870 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fp5ml"] Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.102346 4823 scope.go:117] "RemoveContainer" containerID="eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.104320 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fp5ml"] Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.154056 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" path="/var/lib/kubelet/pods/d3082cf1-07cd-4a5a-860f-517063f353b1/volumes" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.164251 4823 scope.go:117] "RemoveContainer" containerID="4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418" Dec 06 07:56:25 crc kubenswrapper[4823]: E1206 07:56:25.166010 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418\": container with ID starting with 4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418 not found: ID does not exist" containerID="4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.166055 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418"} err="failed to get container status \"4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418\": rpc error: code = NotFound desc = could not find container \"4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418\": container with ID starting with 4d39168d086122e7e2a33593d6a594944585a5075079713d75b7b19f53352418 not found: ID does not exist" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.166078 4823 scope.go:117] "RemoveContainer" containerID="2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab" Dec 06 07:56:25 crc kubenswrapper[4823]: E1206 07:56:25.166336 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab\": container with ID starting with 2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab not found: ID does not exist" containerID="2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.166360 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab"} err="failed to get container status \"2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab\": rpc error: code = NotFound desc = could not find container \"2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab\": container with ID starting with 2a0a5da3a3e47b5d7fa9845518de0197498d8378f1296f60ba787347e8330dab not found: ID does not exist" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 
07:56:25.166375 4823 scope.go:117] "RemoveContainer" containerID="eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80" Dec 06 07:56:25 crc kubenswrapper[4823]: E1206 07:56:25.166678 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80\": container with ID starting with eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80 not found: ID does not exist" containerID="eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80" Dec 06 07:56:25 crc kubenswrapper[4823]: I1206 07:56:25.166705 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80"} err="failed to get container status \"eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80\": rpc error: code = NotFound desc = could not find container \"eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80\": container with ID starting with eb8c3cdeab9c20bd88fef73fb184031b683a24a3344252d5a8de852f7ffd4e80 not found: ID does not exist" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.588551 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kqhg9"] Dec 06 07:56:28 crc kubenswrapper[4823]: E1206 07:56:28.589702 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="extract-utilities" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.589721 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="extract-utilities" Dec 06 07:56:28 crc kubenswrapper[4823]: E1206 07:56:28.589778 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="registry-server" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.589788 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="registry-server" Dec 06 07:56:28 crc kubenswrapper[4823]: E1206 07:56:28.589808 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="extract-content" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.589817 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="extract-content" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.590085 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3082cf1-07cd-4a5a-860f-517063f353b1" containerName="registry-server" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.597263 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.601617 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqhg9"] Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.743335 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-catalog-content\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.743773 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-utilities\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.743919 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxl2x\" (UniqueName: \"kubernetes.io/projected/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-kube-api-access-nxl2x\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.845695 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxl2x\" (UniqueName: \"kubernetes.io/projected/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-kube-api-access-nxl2x\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.846075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-catalog-content\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.846130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-utilities\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.846607 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-catalog-content\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.846654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-utilities\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.866958 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nxl2x\" (UniqueName: \"kubernetes.io/projected/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-kube-api-access-nxl2x\") pod \"redhat-marketplace-kqhg9\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:28 crc kubenswrapper[4823]: I1206 07:56:28.934447 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:29 crc kubenswrapper[4823]: I1206 07:56:29.553675 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqhg9"] Dec 06 07:56:30 crc kubenswrapper[4823]: I1206 07:56:30.125223 4823 generic.go:334] "Generic (PLEG): container finished" podID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerID="44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2" exitCode=0 Dec 06 07:56:30 crc kubenswrapper[4823]: I1206 07:56:30.125276 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerDied","Data":"44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2"} Dec 06 07:56:30 crc kubenswrapper[4823]: I1206 07:56:30.125303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerStarted","Data":"82e6359291a6f3c438dc0dc1187cc0d0ffdd11320d4bfd20557f3a1ca15744a9"} Dec 06 07:56:31 crc kubenswrapper[4823]: I1206 07:56:31.136552 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerStarted","Data":"2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d"} Dec 06 07:56:32 crc kubenswrapper[4823]: I1206 07:56:32.148066 4823 generic.go:334] "Generic (PLEG): container finished" podID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerID="2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d" exitCode=0 Dec 06 07:56:32 crc kubenswrapper[4823]: I1206 07:56:32.148281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerDied","Data":"2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d"} Dec 06 07:56:33 crc kubenswrapper[4823]: I1206 07:56:33.159923 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerStarted","Data":"0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1"} Dec 06 07:56:33 crc kubenswrapper[4823]: I1206 07:56:33.178201 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kqhg9" podStartSLOduration=2.674317535 podStartE2EDuration="5.178182493s" podCreationTimestamp="2025-12-06 07:56:28 +0000 UTC" firstStartedPulling="2025-12-06 07:56:30.126955748 +0000 UTC m=+5491.412707708" lastFinishedPulling="2025-12-06 07:56:32.630820706 +0000 UTC m=+5493.916572666" observedRunningTime="2025-12-06 07:56:33.177052951 +0000 UTC m=+5494.462804931" watchObservedRunningTime="2025-12-06 07:56:33.178182493 +0000 UTC m=+5494.463934453" Dec 06 07:56:36 crc kubenswrapper[4823]: I1206 07:56:36.051733 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:56:36 crc kubenswrapper[4823]: I1206 07:56:36.052287 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:56:38 crc kubenswrapper[4823]: I1206 07:56:38.934739 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:38 crc kubenswrapper[4823]: I1206 07:56:38.935230 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:38 crc kubenswrapper[4823]: I1206 07:56:38.982481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:39 crc kubenswrapper[4823]: I1206 07:56:39.280132 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:39 crc kubenswrapper[4823]: I1206 07:56:39.366398 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqhg9"] Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.238567 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kqhg9" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="registry-server" containerID="cri-o://0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1" gracePeriod=2 Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.804142 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.925745 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-catalog-content\") pod \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.926048 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxl2x\" (UniqueName: \"kubernetes.io/projected/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-kube-api-access-nxl2x\") pod \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.926174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-utilities\") pod \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\" (UID: \"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a\") " Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.927103 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-utilities" (OuterVolumeSpecName: "utilities") pod "f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" (UID: "f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.936575 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-kube-api-access-nxl2x" (OuterVolumeSpecName: "kube-api-access-nxl2x") pod "f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" (UID: "f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a"). InnerVolumeSpecName "kube-api-access-nxl2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:56:41 crc kubenswrapper[4823]: I1206 07:56:41.946343 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" (UID: "f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.029208 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.029244 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.029256 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxl2x\" (UniqueName: \"kubernetes.io/projected/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a-kube-api-access-nxl2x\") on node \"crc\" DevicePath \"\"" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.254461 4823 generic.go:334] "Generic (PLEG): container finished" podID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerID="0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1" exitCode=0 Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.254549 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqhg9" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.254557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerDied","Data":"0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1"} Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.255028 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqhg9" event={"ID":"f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a","Type":"ContainerDied","Data":"82e6359291a6f3c438dc0dc1187cc0d0ffdd11320d4bfd20557f3a1ca15744a9"} Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.255086 4823 scope.go:117] "RemoveContainer" containerID="0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.284713 4823 scope.go:117] "RemoveContainer" containerID="2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.289704 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqhg9"] Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.299112 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqhg9"] Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.327455 4823 scope.go:117] "RemoveContainer" containerID="44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.373014 4823 scope.go:117] "RemoveContainer" containerID="0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1" Dec 06 07:56:42 crc kubenswrapper[4823]: E1206 07:56:42.373575 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1\": container with ID starting with 0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1 not found: ID does not exist" containerID="0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.373630 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1"} err="failed to get container status \"0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1\": rpc error: code = NotFound desc = could not find container \"0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1\": container with ID starting with 0d508e60957b64e440ca07054475f09699fe4549eb2dc3e1ebabcef867d5aee1 not found: ID does not exist" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.373675 4823 scope.go:117] "RemoveContainer" containerID="2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d" Dec 06 07:56:42 crc kubenswrapper[4823]: E1206 07:56:42.375274 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d\": container with ID starting with 2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d not found: ID does not exist" containerID="2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.375316 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d"} err="failed to get container status \"2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d\": rpc error: code = NotFound desc = could not find container \"2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d\": container with ID starting with 2876f0e3fae26194df636dea6015ad93eac748b0b96cbb1e11e162b021b0960d not found: ID does not exist" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.375341 4823 scope.go:117] "RemoveContainer" containerID="44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2" Dec 06 07:56:42 crc kubenswrapper[4823]: E1206 07:56:42.375696 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2\": container with ID starting with 44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2 not found: ID does not exist" containerID="44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2" Dec 06 07:56:42 crc kubenswrapper[4823]: I1206 07:56:42.375728 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2"} err="failed to get container status \"44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2\": rpc error: code = NotFound desc = could not find container \"44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2\": container with ID starting with 44c62d9da37659d997fcf5f677cdea65447535db517508f5ec624d517eb4f1c2 not found: ID does not exist" Dec 06 07:56:43 crc kubenswrapper[4823]: I1206 07:56:43.153940 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" path="/var/lib/kubelet/pods/f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a/volumes" Dec 06 07:56:53 crc kubenswrapper[4823]: I1206 07:56:53.653933 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-969f9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 07:56:53 crc kubenswrapper[4823]: I1206 07:56:53.653976 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-969f9 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 07:56:53 crc kubenswrapper[4823]: I1206 07:56:53.654493 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" podUID="c529f398-1c3e-4a7c-a46f-d57d2f588b9c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:56:53 crc kubenswrapper[4823]: I1206 07:56:53.654557 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-969f9" podUID="c529f398-1c3e-4a7c-a46f-d57d2f588b9c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Dec 06 07:57:06 crc kubenswrapper[4823]: I1206 07:57:06.051868 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:57:06 crc kubenswrapper[4823]: I1206 07:57:06.053500 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.052083 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.052624 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.052699 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.053588 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.053689 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107" gracePeriod=600 Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.792701 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107" exitCode=0 Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.792791 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107"} Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.793494 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"} Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 
Dec 06 07:57:36 crc kubenswrapper[4823]: I1206 07:57:36.793541 4823 scope.go:117] "RemoveContainer" containerID="6b90b65544df40e05ad8e8a9d525d4e59935336ddd49b321b32a5e5b1e03c3a8"
Dec 06 07:59:36 crc kubenswrapper[4823]: I1206 07:59:36.051773 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:59:36 crc kubenswrapper[4823]: I1206 07:59:36.052392 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.154220 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"]
Dec 06 08:00:00 crc kubenswrapper[4823]: E1206 08:00:00.155332 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="extract-content"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.155353 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="extract-content"
Dec 06 08:00:00 crc kubenswrapper[4823]: E1206 08:00:00.155377 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="extract-utilities"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.155386 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="extract-utilities"
Dec 06 08:00:00 crc kubenswrapper[4823]: E1206 08:00:00.155400 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="registry-server"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.155408 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="registry-server"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.155743 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84391ab-ee76-41f9-8bb0-ed39b4fe9e6a" containerName="registry-server"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.156770 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
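The collect-profiles pod added at 08:00:00 is a CronJob instance, and the numeric suffix in its Job name is the scheduled run time in minutes since the Unix epoch; decoding it reproduces the timestamps seen in the journal (29416800 is exactly 2025-12-06 08:00:00 UTC, the collect-profiles-29416755 job deleted further down decodes to 07:15, and the keystone-cron-29416801 pod that appears later decodes to 08:01). A quick check:

    from datetime import datetime, timezone

    # CronJob-created Job names carry the scheduled time in minutes since epoch.
    for suffix in (29416755, 29416800, 29416801):
        t = datetime.fromtimestamp(suffix * 60, tz=timezone.utc)
        print(suffix, "->", t.isoformat())
    # 29416755 -> 2025-12-06T07:15:00+00:00
    # 29416800 -> 2025-12-06T08:00:00+00:00  (matches "SyncLoop ADD" above)
    # 29416801 -> 2025-12-06T08:01:00+00:00  (keystone-cron pod below)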
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.161020 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.162494 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.165234 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"]
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.314533 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90d5d330-a715-47b6-8749-ebc258dd35fc-config-volume\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.315082 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90d5d330-a715-47b6-8749-ebc258dd35fc-secret-volume\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.315225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8jv\" (UniqueName: \"kubernetes.io/projected/90d5d330-a715-47b6-8749-ebc258dd35fc-kube-api-access-xc8jv\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.417482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90d5d330-a715-47b6-8749-ebc258dd35fc-secret-volume\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.417835 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8jv\" (UniqueName: \"kubernetes.io/projected/90d5d330-a715-47b6-8749-ebc258dd35fc-kube-api-access-xc8jv\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.417928 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90d5d330-a715-47b6-8749-ebc258dd35fc-config-volume\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.419265 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90d5d330-a715-47b6-8749-ebc258dd35fc-config-volume\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.423552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90d5d330-a715-47b6-8749-ebc258dd35fc-secret-volume\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.437908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8jv\" (UniqueName: \"kubernetes.io/projected/90d5d330-a715-47b6-8749-ebc258dd35fc-kube-api-access-xc8jv\") pod \"collect-profiles-29416800-dhrn6\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.492387 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.695697 4823 generic.go:334] "Generic (PLEG): container finished" podID="bc939bd4-7c0b-4783-a90c-cb9791a86c9f" containerID="9fde670becde100a33b05eb62ecbed10be4a6dbd7b2a06c9ea6e5482580a148d" exitCode=1
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.695857 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc939bd4-7c0b-4783-a90c-cb9791a86c9f","Type":"ContainerDied","Data":"9fde670becde100a33b05eb62ecbed10be4a6dbd7b2a06c9ea6e5482580a148d"}
Dec 06 08:00:00 crc kubenswrapper[4823]: I1206 08:00:00.951632 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"]
Dec 06 08:00:01 crc kubenswrapper[4823]: I1206 08:00:01.709323 4823 generic.go:334] "Generic (PLEG): container finished" podID="90d5d330-a715-47b6-8749-ebc258dd35fc" containerID="97e8230b9d44dc9b6267440d77f0101c6b6436f1760ec1b4f794ddc6b11b094c" exitCode=0
Dec 06 08:00:01 crc kubenswrapper[4823]: I1206 08:00:01.709412 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6" event={"ID":"90d5d330-a715-47b6-8749-ebc258dd35fc","Type":"ContainerDied","Data":"97e8230b9d44dc9b6267440d77f0101c6b6436f1760ec1b4f794ddc6b11b094c"}
Dec 06 08:00:01 crc kubenswrapper[4823]: I1206 08:00:01.711066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6" event={"ID":"90d5d330-a715-47b6-8749-ebc258dd35fc","Type":"ContainerStarted","Data":"c39c1c23284e845599730cd5c4af6ff9db708de898840c45e0f12966b1bd3593"}
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.084638 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259534 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config-secret\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259621 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-workdir\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259713 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mm7t\" (UniqueName: \"kubernetes.io/projected/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-kube-api-access-5mm7t\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259752 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-temporary\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259800 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ca-certs\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259839 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-config-data\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259865 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.259901 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.260031 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ssh-key\") pod \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\" (UID: \"bc939bd4-7c0b-4783-a90c-cb9791a86c9f\") "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.260483 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.261120 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.261201 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-config-data" (OuterVolumeSpecName: "config-data") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.267170 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.269774 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.270585 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-kube-api-access-5mm7t" (OuterVolumeSpecName: "kube-api-access-5mm7t") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "kube-api-access-5mm7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.296317 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.301850 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.307719 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.322974 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bc939bd4-7c0b-4783-a90c-cb9791a86c9f" (UID: "bc939bd4-7c0b-4783-a90c-cb9791a86c9f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363441 4823 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ca-certs\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363501 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363547 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363561 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363577 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363590 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363619 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.363632 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mm7t\" (UniqueName: \"kubernetes.io/projected/bc939bd4-7c0b-4783-a90c-cb9791a86c9f-kube-api-access-5mm7t\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.391884 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.465399 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.721446 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.721450 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc939bd4-7c0b-4783-a90c-cb9791a86c9f","Type":"ContainerDied","Data":"477fc55e1f01b0d033be1e6efbdc7fce259b42688790ae984dfbe4f4e94cfdca"}
Dec 06 08:00:02 crc kubenswrapper[4823]: I1206 08:00:02.721502 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477fc55e1f01b0d033be1e6efbdc7fce259b42688790ae984dfbe4f4e94cfdca"
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.100121 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.179508 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90d5d330-a715-47b6-8749-ebc258dd35fc-secret-volume\") pod \"90d5d330-a715-47b6-8749-ebc258dd35fc\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") "
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.179717 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc8jv\" (UniqueName: \"kubernetes.io/projected/90d5d330-a715-47b6-8749-ebc258dd35fc-kube-api-access-xc8jv\") pod \"90d5d330-a715-47b6-8749-ebc258dd35fc\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") "
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.179755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90d5d330-a715-47b6-8749-ebc258dd35fc-config-volume\") pod \"90d5d330-a715-47b6-8749-ebc258dd35fc\" (UID: \"90d5d330-a715-47b6-8749-ebc258dd35fc\") "
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.181099 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d5d330-a715-47b6-8749-ebc258dd35fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "90d5d330-a715-47b6-8749-ebc258dd35fc" (UID: "90d5d330-a715-47b6-8749-ebc258dd35fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.186260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d5d330-a715-47b6-8749-ebc258dd35fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90d5d330-a715-47b6-8749-ebc258dd35fc" (UID: "90d5d330-a715-47b6-8749-ebc258dd35fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.186489 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d5d330-a715-47b6-8749-ebc258dd35fc-kube-api-access-xc8jv" (OuterVolumeSpecName: "kube-api-access-xc8jv") pod "90d5d330-a715-47b6-8749-ebc258dd35fc" (UID: "90d5d330-a715-47b6-8749-ebc258dd35fc"). InnerVolumeSpecName "kube-api-access-xc8jv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.282518 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90d5d330-a715-47b6-8749-ebc258dd35fc-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.282551 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc8jv\" (UniqueName: \"kubernetes.io/projected/90d5d330-a715-47b6-8749-ebc258dd35fc-kube-api-access-xc8jv\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.282560 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90d5d330-a715-47b6-8749-ebc258dd35fc-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.733352 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6" event={"ID":"90d5d330-a715-47b6-8749-ebc258dd35fc","Type":"ContainerDied","Data":"c39c1c23284e845599730cd5c4af6ff9db708de898840c45e0f12966b1bd3593"}
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.733396 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416800-dhrn6"
Dec 06 08:00:03 crc kubenswrapper[4823]: I1206 08:00:03.733402 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c39c1c23284e845599730cd5c4af6ff9db708de898840c45e0f12966b1bd3593"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.178251 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"]
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.189899 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-r6fk7"]
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.952807 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 06 08:00:04 crc kubenswrapper[4823]: E1206 08:00:04.953366 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d5d330-a715-47b6-8749-ebc258dd35fc" containerName="collect-profiles"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.953383 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d5d330-a715-47b6-8749-ebc258dd35fc" containerName="collect-profiles"
Dec 06 08:00:04 crc kubenswrapper[4823]: E1206 08:00:04.953401 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc939bd4-7c0b-4783-a90c-cb9791a86c9f" containerName="tempest-tests-tempest-tests-runner"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.953409 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc939bd4-7c0b-4783-a90c-cb9791a86c9f" containerName="tempest-tests-tempest-tests-runner"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.953655 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d5d330-a715-47b6-8749-ebc258dd35fc" containerName="collect-profiles"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.953693 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc939bd4-7c0b-4783-a90c-cb9791a86c9f" containerName="tempest-tests-tempest-tests-runner"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.954519 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.958153 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-brd8d"
Dec 06 08:00:04 crc kubenswrapper[4823]: I1206 08:00:04.965070 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.016582 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.016743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkrr\" (UniqueName: \"kubernetes.io/projected/57365a20-b2a5-4f40-be8c-5f70d739cfd3-kube-api-access-ghkrr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.118710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.118904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghkrr\" (UniqueName: \"kubernetes.io/projected/57365a20-b2a5-4f40-be8c-5f70d739cfd3-kube-api-access-ghkrr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.119328 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.148981 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghkrr\" (UniqueName: \"kubernetes.io/projected/57365a20-b2a5-4f40-be8c-5f70d739cfd3-kube-api-access-ghkrr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.151834 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57365a20-b2a5-4f40-be8c-5f70d739cfd3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.155179 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0238c3-a925-4c87-b5dd-b531b95f6019" path="/var/lib/kubelet/pods/7d0238c3-a925-4c87-b5dd-b531b95f6019/volumes"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.279797 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.730627 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.732975 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 08:00:05 crc kubenswrapper[4823]: I1206 08:00:05.754423 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"57365a20-b2a5-4f40-be8c-5f70d739cfd3","Type":"ContainerStarted","Data":"fe4d1a96450256725afc612287dc8ef28ad56cbff96bd3b78a7b29931136d4a2"}
Dec 06 08:00:06 crc kubenswrapper[4823]: I1206 08:00:06.052000 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 08:00:06 crc kubenswrapper[4823]: I1206 08:00:06.052058 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 08:00:07 crc kubenswrapper[4823]: I1206 08:00:07.774075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"57365a20-b2a5-4f40-be8c-5f70d739cfd3","Type":"ContainerStarted","Data":"b0bbd038d5873dfac0df428a823b121a0d65a52eecfc3174d361384ae2c650df"}
Dec 06 08:00:07 crc kubenswrapper[4823]: I1206 08:00:07.794688 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.856028593 podStartE2EDuration="3.794644504s" podCreationTimestamp="2025-12-06 08:00:04 +0000 UTC" firstStartedPulling="2025-12-06 08:00:05.73248127 +0000 UTC m=+5707.018233230" lastFinishedPulling="2025-12-06 08:00:06.671097181 +0000 UTC m=+5707.956849141" observedRunningTime="2025-12-06 08:00:07.788044315 +0000 UTC m=+5709.073796275" watchObservedRunningTime="2025-12-06 08:00:07.794644504 +0000 UTC m=+5709.080396474"
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:00:36 crc kubenswrapper[4823]: I1206 08:00:36.052762 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 08:00:36 crc kubenswrapper[4823]: I1206 08:00:36.053558 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:00:36 crc kubenswrapper[4823]: I1206 08:00:36.053610 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" gracePeriod=600 Dec 06 08:00:36 crc kubenswrapper[4823]: E1206 08:00:36.219317 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:00:37 crc kubenswrapper[4823]: I1206 08:00:37.045305 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" exitCode=0 Dec 06 08:00:37 crc kubenswrapper[4823]: I1206 08:00:37.045383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"} Dec 06 08:00:37 crc kubenswrapper[4823]: I1206 08:00:37.045774 4823 scope.go:117] "RemoveContainer" containerID="d09e5fba844f65ee736c5dd962cfbb99aa4b53f2457eee3f1cdf852aaf76a107" Dec 06 08:00:37 crc kubenswrapper[4823]: I1206 08:00:37.046450 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:00:37 crc kubenswrapper[4823]: E1206 08:00:37.046838 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:00:46 crc kubenswrapper[4823]: I1206 08:00:46.490703 4823 scope.go:117] "RemoveContainer" containerID="f11d87db2457e569934361f5d4f59e2cd613157960609cafe271267c7b6ca762" Dec 06 08:00:51 crc kubenswrapper[4823]: I1206 08:00:51.141284 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:00:51 crc kubenswrapper[4823]: E1206 08:00:51.142197 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.207027 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jmqgf/must-gather-zwlf6"] Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.211310 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.213420 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jmqgf"/"kube-root-ca.crt" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.213507 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jmqgf"/"openshift-service-ca.crt" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.215522 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jmqgf"/"default-dockercfg-8kc97" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.225259 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jmqgf/must-gather-zwlf6"] Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.271778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/68e86d3f-c3d7-451b-9c68-d318eb241e87-must-gather-output\") pod \"must-gather-zwlf6\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.272011 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rr9\" (UniqueName: \"kubernetes.io/projected/68e86d3f-c3d7-451b-9c68-d318eb241e87-kube-api-access-l4rr9\") pod \"must-gather-zwlf6\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.374671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rr9\" (UniqueName: \"kubernetes.io/projected/68e86d3f-c3d7-451b-9c68-d318eb241e87-kube-api-access-l4rr9\") pod \"must-gather-zwlf6\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.374896 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/68e86d3f-c3d7-451b-9c68-d318eb241e87-must-gather-output\") pod \"must-gather-zwlf6\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.375353 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/68e86d3f-c3d7-451b-9c68-d318eb241e87-must-gather-output\") pod \"must-gather-zwlf6\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc 
kubenswrapper[4823]: I1206 08:00:56.473242 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4rr9\" (UniqueName: \"kubernetes.io/projected/68e86d3f-c3d7-451b-9c68-d318eb241e87-kube-api-access-l4rr9\") pod \"must-gather-zwlf6\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:56 crc kubenswrapper[4823]: I1206 08:00:56.539115 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:00:57 crc kubenswrapper[4823]: I1206 08:00:57.053279 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jmqgf/must-gather-zwlf6"] Dec 06 08:00:57 crc kubenswrapper[4823]: I1206 08:00:57.231193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" event={"ID":"68e86d3f-c3d7-451b-9c68-d318eb241e87","Type":"ContainerStarted","Data":"60d30cd3e265a752a5af2c198b98d06b7544f2f9703f4e0af080b4658f35599b"} Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.164744 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416801-xnmtd"] Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.166955 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.179072 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416801-xnmtd"] Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.270807 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6ft\" (UniqueName: \"kubernetes.io/projected/1137268d-39c5-4e9e-ba13-73c618ea210e-kube-api-access-kn6ft\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.270881 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-config-data\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.270994 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-fernet-keys\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.271373 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-combined-ca-bundle\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.374725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-fernet-keys\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " 
pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.374934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-combined-ca-bundle\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.375025 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn6ft\" (UniqueName: \"kubernetes.io/projected/1137268d-39c5-4e9e-ba13-73c618ea210e-kube-api-access-kn6ft\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.375103 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-config-data\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.385218 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-fernet-keys\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.387401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-combined-ca-bundle\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.395799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-config-data\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.408400 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn6ft\" (UniqueName: \"kubernetes.io/projected/1137268d-39c5-4e9e-ba13-73c618ea210e-kube-api-access-kn6ft\") pod \"keystone-cron-29416801-xnmtd\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:00 crc kubenswrapper[4823]: I1206 08:01:00.506183 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:04 crc kubenswrapper[4823]: W1206 08:01:04.287823 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1137268d_39c5_4e9e_ba13_73c618ea210e.slice/crio-2a6db656237bae553ebced3b64656d81aaa7e1ebfdf4c3d4fdb64e3ba2078b20 WatchSource:0}: Error finding container 2a6db656237bae553ebced3b64656d81aaa7e1ebfdf4c3d4fdb64e3ba2078b20: Status 404 returned error can't find the container with id 2a6db656237bae553ebced3b64656d81aaa7e1ebfdf4c3d4fdb64e3ba2078b20 Dec 06 08:01:04 crc kubenswrapper[4823]: I1206 08:01:04.297045 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416801-xnmtd"] Dec 06 08:01:04 crc kubenswrapper[4823]: I1206 08:01:04.318330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416801-xnmtd" event={"ID":"1137268d-39c5-4e9e-ba13-73c618ea210e","Type":"ContainerStarted","Data":"2a6db656237bae553ebced3b64656d81aaa7e1ebfdf4c3d4fdb64e3ba2078b20"} Dec 06 08:01:05 crc kubenswrapper[4823]: I1206 08:01:05.331055 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" event={"ID":"68e86d3f-c3d7-451b-9c68-d318eb241e87","Type":"ContainerStarted","Data":"8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227"} Dec 06 08:01:05 crc kubenswrapper[4823]: I1206 08:01:05.331413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" event={"ID":"68e86d3f-c3d7-451b-9c68-d318eb241e87","Type":"ContainerStarted","Data":"4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f"} Dec 06 08:01:05 crc kubenswrapper[4823]: I1206 08:01:05.333122 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416801-xnmtd" event={"ID":"1137268d-39c5-4e9e-ba13-73c618ea210e","Type":"ContainerStarted","Data":"e23db0c7da0cb285741f543e9ba2bd9f40f3181940f6f932b70c3c9fdb46cacd"} Dec 06 08:01:05 crc kubenswrapper[4823]: I1206 08:01:05.363595 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" podStartSLOduration=2.521208727 podStartE2EDuration="9.363573584s" podCreationTimestamp="2025-12-06 08:00:56 +0000 UTC" firstStartedPulling="2025-12-06 08:00:57.061517372 +0000 UTC m=+5758.347269322" lastFinishedPulling="2025-12-06 08:01:03.903882219 +0000 UTC m=+5765.189634179" observedRunningTime="2025-12-06 08:01:05.352682151 +0000 UTC m=+5766.638434111" watchObservedRunningTime="2025-12-06 08:01:05.363573584 +0000 UTC m=+5766.649325544" Dec 06 08:01:05 crc kubenswrapper[4823]: I1206 08:01:05.377341 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416801-xnmtd" podStartSLOduration=5.377316919 podStartE2EDuration="5.377316919s" podCreationTimestamp="2025-12-06 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 08:01:05.366592471 +0000 UTC m=+5766.652344431" watchObservedRunningTime="2025-12-06 08:01:05.377316919 +0000 UTC m=+5766.663068879" Dec 06 08:01:06 crc kubenswrapper[4823]: I1206 08:01:06.141461 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:01:06 crc kubenswrapper[4823]: E1206 08:01:06.142161 4823 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.198566 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-h5m5s"] Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.200704 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.378218 4823 generic.go:334] "Generic (PLEG): container finished" podID="1137268d-39c5-4e9e-ba13-73c618ea210e" containerID="e23db0c7da0cb285741f543e9ba2bd9f40f3181940f6f932b70c3c9fdb46cacd" exitCode=0 Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.378303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416801-xnmtd" event={"ID":"1137268d-39c5-4e9e-ba13-73c618ea210e","Type":"ContainerDied","Data":"e23db0c7da0cb285741f543e9ba2bd9f40f3181940f6f932b70c3c9fdb46cacd"} Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.381695 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79rs\" (UniqueName: \"kubernetes.io/projected/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-kube-api-access-l79rs\") pod \"crc-debug-h5m5s\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") " pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.381780 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-host\") pod \"crc-debug-h5m5s\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") " pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.483758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-host\") pod \"crc-debug-h5m5s\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") " pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.483909 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-host\") pod \"crc-debug-h5m5s\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") " pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.484031 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l79rs\" (UniqueName: \"kubernetes.io/projected/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-kube-api-access-l79rs\") pod \"crc-debug-h5m5s\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") " pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.503598 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79rs\" (UniqueName: \"kubernetes.io/projected/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-kube-api-access-l79rs\") pod \"crc-debug-h5m5s\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") " 
pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: I1206 08:01:09.560695 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:01:09 crc kubenswrapper[4823]: W1206 08:01:09.596460 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d20b83_5fdb_4b60_bd88_c7ac3ca9154a.slice/crio-b8bb81debd5fe31bb653aed6dc3f0e0def25ffa58e55f9549495e76b9e5ed067 WatchSource:0}: Error finding container b8bb81debd5fe31bb653aed6dc3f0e0def25ffa58e55f9549495e76b9e5ed067: Status 404 returned error can't find the container with id b8bb81debd5fe31bb653aed6dc3f0e0def25ffa58e55f9549495e76b9e5ed067 Dec 06 08:01:10 crc kubenswrapper[4823]: I1206 08:01:10.391572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" event={"ID":"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a","Type":"ContainerStarted","Data":"b8bb81debd5fe31bb653aed6dc3f0e0def25ffa58e55f9549495e76b9e5ed067"} Dec 06 08:01:10 crc kubenswrapper[4823]: I1206 08:01:10.890600 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.038073 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn6ft\" (UniqueName: \"kubernetes.io/projected/1137268d-39c5-4e9e-ba13-73c618ea210e-kube-api-access-kn6ft\") pod \"1137268d-39c5-4e9e-ba13-73c618ea210e\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.038210 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-combined-ca-bundle\") pod \"1137268d-39c5-4e9e-ba13-73c618ea210e\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.038317 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-config-data\") pod \"1137268d-39c5-4e9e-ba13-73c618ea210e\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.038453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-fernet-keys\") pod \"1137268d-39c5-4e9e-ba13-73c618ea210e\" (UID: \"1137268d-39c5-4e9e-ba13-73c618ea210e\") " Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.044525 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1137268d-39c5-4e9e-ba13-73c618ea210e-kube-api-access-kn6ft" (OuterVolumeSpecName: "kube-api-access-kn6ft") pod "1137268d-39c5-4e9e-ba13-73c618ea210e" (UID: "1137268d-39c5-4e9e-ba13-73c618ea210e"). InnerVolumeSpecName "kube-api-access-kn6ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.047014 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1137268d-39c5-4e9e-ba13-73c618ea210e" (UID: "1137268d-39c5-4e9e-ba13-73c618ea210e"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.090798 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1137268d-39c5-4e9e-ba13-73c618ea210e" (UID: "1137268d-39c5-4e9e-ba13-73c618ea210e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.113896 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-config-data" (OuterVolumeSpecName: "config-data") pod "1137268d-39c5-4e9e-ba13-73c618ea210e" (UID: "1137268d-39c5-4e9e-ba13-73c618ea210e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.141262 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.141292 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn6ft\" (UniqueName: \"kubernetes.io/projected/1137268d-39c5-4e9e-ba13-73c618ea210e-kube-api-access-kn6ft\") on node \"crc\" DevicePath \"\"" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.141303 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.141314 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1137268d-39c5-4e9e-ba13-73c618ea210e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.403155 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416801-xnmtd" event={"ID":"1137268d-39c5-4e9e-ba13-73c618ea210e","Type":"ContainerDied","Data":"2a6db656237bae553ebced3b64656d81aaa7e1ebfdf4c3d4fdb64e3ba2078b20"} Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.403216 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a6db656237bae553ebced3b64656d81aaa7e1ebfdf4c3d4fdb64e3ba2078b20" Dec 06 08:01:11 crc kubenswrapper[4823]: I1206 08:01:11.403241 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416801-xnmtd" Dec 06 08:01:18 crc kubenswrapper[4823]: I1206 08:01:18.141268 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:01:18 crc kubenswrapper[4823]: E1206 08:01:18.142319 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:01:22 crc kubenswrapper[4823]: I1206 08:01:22.520441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" event={"ID":"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a","Type":"ContainerStarted","Data":"c012d5d826685f84e0ba7cd1261a0769a3274e53116927847a88f2df78507289"} Dec 06 08:01:22 crc kubenswrapper[4823]: I1206 08:01:22.540451 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" podStartSLOduration=1.378887539 podStartE2EDuration="13.540429125s" podCreationTimestamp="2025-12-06 08:01:09 +0000 UTC" firstStartedPulling="2025-12-06 08:01:09.599347844 +0000 UTC m=+5770.885099804" lastFinishedPulling="2025-12-06 08:01:21.76088942 +0000 UTC m=+5783.046641390" observedRunningTime="2025-12-06 08:01:22.537898332 +0000 UTC m=+5783.823650292" watchObservedRunningTime="2025-12-06 08:01:22.540429125 +0000 UTC m=+5783.826181085" Dec 06 08:01:30 crc kubenswrapper[4823]: I1206 08:01:30.151446 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:01:30 crc kubenswrapper[4823]: E1206 08:01:30.160624 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:01:44 crc kubenswrapper[4823]: I1206 08:01:44.141543 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:01:44 crc kubenswrapper[4823]: E1206 08:01:44.142489 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:01:59 crc kubenswrapper[4823]: I1206 08:01:59.149720 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:01:59 crc kubenswrapper[4823]: E1206 08:01:59.151573 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 06 08:02:12 crc kubenswrapper[4823]: I1206 08:02:12.143521 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"
Dec 06 08:02:12 crc kubenswrapper[4823]: E1206 08:02:12.144375 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 08:02:16 crc kubenswrapper[4823]: I1206 08:02:16.166272 4823 generic.go:334] "Generic (PLEG): container finished" podID="64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" containerID="c012d5d826685f84e0ba7cd1261a0769a3274e53116927847a88f2df78507289" exitCode=0
Dec 06 08:02:16 crc kubenswrapper[4823]: I1206 08:02:16.166356 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" event={"ID":"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a","Type":"ContainerDied","Data":"c012d5d826685f84e0ba7cd1261a0769a3274e53116927847a88f2df78507289"}
Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.295558 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s"
Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.337524 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-h5m5s"]
Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.347089 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-h5m5s"]
Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.409254 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-host\") pod \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") "
Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.409705 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l79rs\" (UniqueName: \"kubernetes.io/projected/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-kube-api-access-l79rs\") pod \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\" (UID: \"64d20b83-5fdb-4b60-bd88-c7ac3ca9154a\") "
Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.409399 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-host" (OuterVolumeSpecName: "host") pod "64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" (UID: "64d20b83-5fdb-4b60-bd88-c7ac3ca9154a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.410819 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-host\") on node \"crc\" DevicePath \"\"" Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.416347 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-kube-api-access-l79rs" (OuterVolumeSpecName: "kube-api-access-l79rs") pod "64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" (UID: "64d20b83-5fdb-4b60-bd88-c7ac3ca9154a"). InnerVolumeSpecName "kube-api-access-l79rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:02:17 crc kubenswrapper[4823]: I1206 08:02:17.512595 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l79rs\" (UniqueName: \"kubernetes.io/projected/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a-kube-api-access-l79rs\") on node \"crc\" DevicePath \"\"" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.185204 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8bb81debd5fe31bb653aed6dc3f0e0def25ffa58e55f9549495e76b9e5ed067" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.185246 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-h5m5s" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.519088 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-rctwm"] Dec 06 08:02:18 crc kubenswrapper[4823]: E1206 08:02:18.519499 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" containerName="container-00" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.519511 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" containerName="container-00" Dec 06 08:02:18 crc kubenswrapper[4823]: E1206 08:02:18.519528 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1137268d-39c5-4e9e-ba13-73c618ea210e" containerName="keystone-cron" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.519534 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1137268d-39c5-4e9e-ba13-73c618ea210e" containerName="keystone-cron" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.519779 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" containerName="container-00" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.519791 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1137268d-39c5-4e9e-ba13-73c618ea210e" containerName="keystone-cron" Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.520536 4823 util.go:30] "No sandbox for pod can be found. 
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.633434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rnd\" (UniqueName: \"kubernetes.io/projected/de1decd5-a1a6-4c62-869a-b989b8379edf-kube-api-access-66rnd\") pod \"crc-debug-rctwm\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") " pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.633523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de1decd5-a1a6-4c62-869a-b989b8379edf-host\") pod \"crc-debug-rctwm\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") " pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.735906 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rnd\" (UniqueName: \"kubernetes.io/projected/de1decd5-a1a6-4c62-869a-b989b8379edf-kube-api-access-66rnd\") pod \"crc-debug-rctwm\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") " pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.736200 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de1decd5-a1a6-4c62-869a-b989b8379edf-host\") pod \"crc-debug-rctwm\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") " pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.736366 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de1decd5-a1a6-4c62-869a-b989b8379edf-host\") pod \"crc-debug-rctwm\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") " pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.755296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rnd\" (UniqueName: \"kubernetes.io/projected/de1decd5-a1a6-4c62-869a-b989b8379edf-kube-api-access-66rnd\") pod \"crc-debug-rctwm\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") " pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:18 crc kubenswrapper[4823]: I1206 08:02:18.855629 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:19 crc kubenswrapper[4823]: I1206 08:02:19.153008 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d20b83-5fdb-4b60-bd88-c7ac3ca9154a" path="/var/lib/kubelet/pods/64d20b83-5fdb-4b60-bd88-c7ac3ca9154a/volumes"
Dec 06 08:02:19 crc kubenswrapper[4823]: I1206 08:02:19.196616 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-rctwm" event={"ID":"de1decd5-a1a6-4c62-869a-b989b8379edf","Type":"ContainerStarted","Data":"f91862b8a34bf6f89cfe944e4405a96ab4164e4dea127c95803a7e27fe6d764a"}
Dec 06 08:02:20 crc kubenswrapper[4823]: I1206 08:02:20.207471 4823 generic.go:334] "Generic (PLEG): container finished" podID="de1decd5-a1a6-4c62-869a-b989b8379edf" containerID="61f82de5e54a338b4a5edbe49f329b7245399a61cbaea3344f802b99b4bd00f5" exitCode=0
Dec 06 08:02:20 crc kubenswrapper[4823]: I1206 08:02:20.207529 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-rctwm" event={"ID":"de1decd5-a1a6-4c62-869a-b989b8379edf","Type":"ContainerDied","Data":"61f82de5e54a338b4a5edbe49f329b7245399a61cbaea3344f802b99b4bd00f5"}
Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.337203 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-rctwm"
Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.493247 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66rnd\" (UniqueName: \"kubernetes.io/projected/de1decd5-a1a6-4c62-869a-b989b8379edf-kube-api-access-66rnd\") pod \"de1decd5-a1a6-4c62-869a-b989b8379edf\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") "
Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.493402 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de1decd5-a1a6-4c62-869a-b989b8379edf-host\") pod \"de1decd5-a1a6-4c62-869a-b989b8379edf\" (UID: \"de1decd5-a1a6-4c62-869a-b989b8379edf\") "
Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.493603 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1decd5-a1a6-4c62-869a-b989b8379edf-host" (OuterVolumeSpecName: "host") pod "de1decd5-a1a6-4c62-869a-b989b8379edf" (UID: "de1decd5-a1a6-4c62-869a-b989b8379edf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.494053 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de1decd5-a1a6-4c62-869a-b989b8379edf-host\") on node \"crc\" DevicePath \"\""
Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.501081 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1decd5-a1a6-4c62-869a-b989b8379edf-kube-api-access-66rnd" (OuterVolumeSpecName: "kube-api-access-66rnd") pod "de1decd5-a1a6-4c62-869a-b989b8379edf" (UID: "de1decd5-a1a6-4c62-869a-b989b8379edf"). InnerVolumeSpecName "kube-api-access-66rnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:02:21 crc kubenswrapper[4823]: I1206 08:02:21.595570 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66rnd\" (UniqueName: \"kubernetes.io/projected/de1decd5-a1a6-4c62-869a-b989b8379edf-kube-api-access-66rnd\") on node \"crc\" DevicePath \"\"" Dec 06 08:02:22 crc kubenswrapper[4823]: I1206 08:02:22.233047 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-rctwm" event={"ID":"de1decd5-a1a6-4c62-869a-b989b8379edf","Type":"ContainerDied","Data":"f91862b8a34bf6f89cfe944e4405a96ab4164e4dea127c95803a7e27fe6d764a"} Dec 06 08:02:22 crc kubenswrapper[4823]: I1206 08:02:22.233085 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-rctwm" Dec 06 08:02:22 crc kubenswrapper[4823]: I1206 08:02:22.233102 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91862b8a34bf6f89cfe944e4405a96ab4164e4dea127c95803a7e27fe6d764a" Dec 06 08:02:22 crc kubenswrapper[4823]: I1206 08:02:22.644772 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-rctwm"] Dec 06 08:02:22 crc kubenswrapper[4823]: I1206 08:02:22.655313 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-rctwm"] Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.153997 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1decd5-a1a6-4c62-869a-b989b8379edf" path="/var/lib/kubelet/pods/de1decd5-a1a6-4c62-869a-b989b8379edf/volumes" Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.827718 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-thpx4"] Dec 06 08:02:23 crc kubenswrapper[4823]: E1206 08:02:23.828710 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1decd5-a1a6-4c62-869a-b989b8379edf" containerName="container-00" Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.828724 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1decd5-a1a6-4c62-869a-b989b8379edf" containerName="container-00" Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.829227 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1decd5-a1a6-4c62-869a-b989b8379edf" containerName="container-00" Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.830318 4823 util.go:30] "No sandbox for pod can be found. 
Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.958552 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/595d588c-b71d-440b-8e0f-818da7bf0df9-host\") pod \"crc-debug-thpx4\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") " pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:23 crc kubenswrapper[4823]: I1206 08:02:23.958763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cdjg\" (UniqueName: \"kubernetes.io/projected/595d588c-b71d-440b-8e0f-818da7bf0df9-kube-api-access-2cdjg\") pod \"crc-debug-thpx4\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") " pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:24 crc kubenswrapper[4823]: I1206 08:02:24.060828 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/595d588c-b71d-440b-8e0f-818da7bf0df9-host\") pod \"crc-debug-thpx4\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") " pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:24 crc kubenswrapper[4823]: I1206 08:02:24.060996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cdjg\" (UniqueName: \"kubernetes.io/projected/595d588c-b71d-440b-8e0f-818da7bf0df9-kube-api-access-2cdjg\") pod \"crc-debug-thpx4\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") " pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:24 crc kubenswrapper[4823]: I1206 08:02:24.061043 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/595d588c-b71d-440b-8e0f-818da7bf0df9-host\") pod \"crc-debug-thpx4\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") " pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:24 crc kubenswrapper[4823]: I1206 08:02:24.087092 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cdjg\" (UniqueName: \"kubernetes.io/projected/595d588c-b71d-440b-8e0f-818da7bf0df9-kube-api-access-2cdjg\") pod \"crc-debug-thpx4\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") " pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:24 crc kubenswrapper[4823]: I1206 08:02:24.151726 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:24 crc kubenswrapper[4823]: I1206 08:02:24.270877 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-thpx4" event={"ID":"595d588c-b71d-440b-8e0f-818da7bf0df9","Type":"ContainerStarted","Data":"a05a640264086847c2c7f67564879148455900af53729998997422d231a6d401"}
Dec 06 08:02:25 crc kubenswrapper[4823]: I1206 08:02:25.282703 4823 generic.go:334] "Generic (PLEG): container finished" podID="595d588c-b71d-440b-8e0f-818da7bf0df9" containerID="bf8f2e0f9862a6ff07e767bb075210aa22b807fcef852ce046252b72ae3f219d" exitCode=0
Dec 06 08:02:25 crc kubenswrapper[4823]: I1206 08:02:25.282803 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/crc-debug-thpx4" event={"ID":"595d588c-b71d-440b-8e0f-818da7bf0df9","Type":"ContainerDied","Data":"bf8f2e0f9862a6ff07e767bb075210aa22b807fcef852ce046252b72ae3f219d"}
Dec 06 08:02:25 crc kubenswrapper[4823]: I1206 08:02:25.349298 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-thpx4"]
Dec 06 08:02:25 crc kubenswrapper[4823]: I1206 08:02:25.360754 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jmqgf/crc-debug-thpx4"]
Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.428392 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/crc-debug-thpx4"
Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.611140 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/595d588c-b71d-440b-8e0f-818da7bf0df9-host\") pod \"595d588c-b71d-440b-8e0f-818da7bf0df9\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") "
Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.611396 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cdjg\" (UniqueName: \"kubernetes.io/projected/595d588c-b71d-440b-8e0f-818da7bf0df9-kube-api-access-2cdjg\") pod \"595d588c-b71d-440b-8e0f-818da7bf0df9\" (UID: \"595d588c-b71d-440b-8e0f-818da7bf0df9\") "
Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.612657 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/595d588c-b71d-440b-8e0f-818da7bf0df9-host" (OuterVolumeSpecName: "host") pod "595d588c-b71d-440b-8e0f-818da7bf0df9" (UID: "595d588c-b71d-440b-8e0f-818da7bf0df9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.626488 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595d588c-b71d-440b-8e0f-818da7bf0df9-kube-api-access-2cdjg" (OuterVolumeSpecName: "kube-api-access-2cdjg") pod "595d588c-b71d-440b-8e0f-818da7bf0df9" (UID: "595d588c-b71d-440b-8e0f-818da7bf0df9"). InnerVolumeSpecName "kube-api-access-2cdjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.715512 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cdjg\" (UniqueName: \"kubernetes.io/projected/595d588c-b71d-440b-8e0f-818da7bf0df9-kube-api-access-2cdjg\") on node \"crc\" DevicePath \"\"" Dec 06 08:02:26 crc kubenswrapper[4823]: I1206 08:02:26.715571 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/595d588c-b71d-440b-8e0f-818da7bf0df9-host\") on node \"crc\" DevicePath \"\"" Dec 06 08:02:27 crc kubenswrapper[4823]: I1206 08:02:27.141410 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:02:27 crc kubenswrapper[4823]: E1206 08:02:27.142145 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:02:27 crc kubenswrapper[4823]: I1206 08:02:27.156123 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595d588c-b71d-440b-8e0f-818da7bf0df9" path="/var/lib/kubelet/pods/595d588c-b71d-440b-8e0f-818da7bf0df9/volumes" Dec 06 08:02:27 crc kubenswrapper[4823]: I1206 08:02:27.303124 4823 scope.go:117] "RemoveContainer" containerID="bf8f2e0f9862a6ff07e767bb075210aa22b807fcef852ce046252b72ae3f219d" Dec 06 08:02:27 crc kubenswrapper[4823]: I1206 08:02:27.303351 4823 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 08:02:39 crc kubenswrapper[4823]: I1206 08:02:39.182827 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"
Dec 06 08:02:39 crc kubenswrapper[4823]: E1206 08:02:39.183523 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 08:02:51 crc kubenswrapper[4823]: I1206 08:02:51.147349 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"
Dec 06 08:02:51 crc kubenswrapper[4823]: E1206 08:02:51.148141 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 08:02:54 crc kubenswrapper[4823]: I1206 08:02:54.802602 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-576979cb46-vpljd_d367d201-b052-4399-999b-a10e9b8a515f/barbican-api-log/0.log"
Dec 06 08:02:54 crc kubenswrapper[4823]: I1206 08:02:54.829111 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-576979cb46-vpljd_d367d201-b052-4399-999b-a10e9b8a515f/barbican-api/0.log"
Dec 06 08:02:55 crc kubenswrapper[4823]: I1206 08:02:55.025737 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fcb8dc678-hn4ms_ab1d7d34-2799-4553-9895-57c3c573cda2/barbican-keystone-listener/0.log"
Dec 06 08:02:55 crc kubenswrapper[4823]: I1206 08:02:55.196440 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fcb8dc678-hn4ms_ab1d7d34-2799-4553-9895-57c3c573cda2/barbican-keystone-listener-log/0.log"
Dec 06 08:02:55 crc kubenswrapper[4823]: I1206 08:02:55.602570 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-689f45894f-mlpws_76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9/barbican-worker/0.log"
Dec 06 08:02:55 crc kubenswrapper[4823]: I1206 08:02:55.669103 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-689f45894f-mlpws_76e106cf-a3c3-4af1-a57e-6fd0bcfb56f9/barbican-worker-log/0.log"
Dec 06 08:02:55 crc kubenswrapper[4823]: I1206 08:02:55.721649 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zw4nk_05c11f0c-8eda-4110-b929-b1ef19924e5e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.032264 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e1d1477-a236-458f-9b57-1d74fc56a92d/ceilometer-notification-agent/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.069731 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e1d1477-a236-458f-9b57-1d74fc56a92d/proxy-httpd/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.089602 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e1d1477-a236-458f-9b57-1d74fc56a92d/ceilometer-central-agent/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.261754 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e1d1477-a236-458f-9b57-1d74fc56a92d/sg-core/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.377717 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_533482ee-6e34-4054-85ed-96df7676e1ab/cinder-api-log/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.724570 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_533482ee-6e34-4054-85ed-96df7676e1ab/cinder-api/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.792171 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_5a942075-9497-4ebd-958d-14ea50b6558a/probe/0.log"
Dec 06 08:02:56 crc kubenswrapper[4823]: I1206 08:02:56.984643 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_5a942075-9497-4ebd-958d-14ea50b6558a/cinder-backup/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.083886 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9b5dc60b-23c7-4e50-8944-3917a44ad224/cinder-scheduler/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.118810 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9b5dc60b-23c7-4e50-8944-3917a44ad224/probe/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.455989 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_c006d7d9-a998-4d47-97e6-5f81a6c75c0e/probe/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.540321 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_c006d7d9-a998-4d47-97e6-5f81a6c75c0e/cinder-volume/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.777522 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_a27453d1-4b9a-481c-8145-5a31f7876f97/cinder-volume/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.855759 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_a27453d1-4b9a-481c-8145-5a31f7876f97/probe/0.log"
Dec 06 08:02:57 crc kubenswrapper[4823]: I1206 08:02:57.978683 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6876q_f09400da-5834-4f03-8212-4c4a27edbe13/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.056525 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-56wlv_c2f3406e-802c-4387-90f6-51980c01408a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.095000 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-755bd4b5c7-97dgz_709c2986-1fcb-419b-9d05-2afed5c1542b/init/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.320417 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-755bd4b5c7-97dgz_709c2986-1fcb-419b-9d05-2afed5c1542b/init/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.349962 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mvmf5_7567b412-7ee9-413a-999b-ac4525e10bfa/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.545918 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_047c6c9f-696a-47a0-9adb-3dca69a83eea/glance-log/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.555146 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-755bd4b5c7-97dgz_709c2986-1fcb-419b-9d05-2afed5c1542b/dnsmasq-dns/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.579678 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_047c6c9f-696a-47a0-9adb-3dca69a83eea/glance-httpd/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.719821 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_780274e1-f304-47a7-81ad-933887d54459/glance-httpd/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.774102 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_780274e1-f304-47a7-81ad-933887d54459/glance-log/0.log"
Dec 06 08:02:58 crc kubenswrapper[4823]: I1206 08:02:58.921519 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dcc5c8c58-p6xlr_4f410137-3943-4e5f-890f-d7f54e165884/horizon/0.log"
Dec 06 08:02:59 crc kubenswrapper[4823]: I1206 08:02:59.356369 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4gh5f_908d817e-af62-4f73-a91d-c005192b813c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:02:59 crc kubenswrapper[4823]: I1206 08:02:59.504742 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lhwv8_49f9da65-c637-468d-b0e6-7e8f3a9c6a6a/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:02:59 crc kubenswrapper[4823]: I1206 08:02:59.510763 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dcc5c8c58-p6xlr_4f410137-3943-4e5f-890f-d7f54e165884/horizon-log/0.log"
Dec 06 08:02:59 crc kubenswrapper[4823]: I1206 08:02:59.775213 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416801-xnmtd_1137268d-39c5-4e9e-ba13-73c618ea210e/keystone-cron/0.log"
Dec 06 08:02:59 crc kubenswrapper[4823]: I1206 08:02:59.832962 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416741-47p2f_817eedb8-20c3-48ab-b610-60b5a06ee67f/keystone-cron/0.log"
Dec 06 08:03:00 crc kubenswrapper[4823]: I1206 08:03:00.024581 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_62548e33-ebf2-47ed-b520-84fb85791699/kube-state-metrics/0.log"
Dec 06 08:03:00 crc kubenswrapper[4823]: I1206 08:03:00.030900 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-664df9559f-rrrdk_8f6baa5f-6712-4665-9753-9a98f2bc5595/keystone-api/0.log"
Dec 06 08:03:00 crc kubenswrapper[4823]: I1206 08:03:00.114316 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7dszq_f6c15da0-c5c9-4ef5-affe-f1cc8ed5ba19/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:03:00 crc kubenswrapper[4823]: I1206 08:03:00.701293 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5w4l2_87f31194-7aad-4688-88b3-41c9ac8c2a6f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:03:00 crc kubenswrapper[4823]: I1206 08:03:00.750222 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6649b9bbc9-h8s24_e2bf62ef-c456-4bf0-a670-a39f3b3a7079/neutron-httpd/0.log"
Dec 06 08:03:00 crc kubenswrapper[4823]: I1206 08:03:00.766496 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6649b9bbc9-h8s24_e2bf62ef-c456-4bf0-a670-a39f3b3a7079/neutron-api/0.log"
Dec 06 08:03:01 crc kubenswrapper[4823]: I1206 08:03:01.319639 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1a74136e-ea73-45df-b31c-494fa24fecf8/nova-cell0-conductor-conductor/0.log"
Dec 06 08:03:01 crc kubenswrapper[4823]: I1206 08:03:01.806085 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e153aa97-4c79-491e-8392-cd40d3a40d19/nova-cell1-conductor-conductor/0.log"
Dec 06 08:03:02 crc kubenswrapper[4823]: I1206 08:03:02.008256 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f81822cf-636b-4865-8ceb-e97e6a0f29c3/nova-cell1-novncproxy-novncproxy/0.log"
Dec 06 08:03:02 crc kubenswrapper[4823]: I1206 08:03:02.140426 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"
Dec 06 08:03:02 crc kubenswrapper[4823]: E1206 08:03:02.140724 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 08:03:02 crc kubenswrapper[4823]: I1206 08:03:02.350161 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8cc04758-e28e-4ed1-8abb-e2cc94b0662c/nova-api-log/0.log"
Dec 06 08:03:02 crc kubenswrapper[4823]: I1206 08:03:02.372307 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gbdl6_63f55880-0615-44ec-a7b5-318e731d45c1/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 06 08:03:02 crc kubenswrapper[4823]: I1206 08:03:02.600787 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb/nova-metadata-log/0.log"
Dec 06 08:03:02 crc kubenswrapper[4823]: I1206 08:03:02.885048 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8cc04758-e28e-4ed1-8abb-e2cc94b0662c/nova-api-api/0.log"
Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.149135 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8e707833-acd6-49f7-91f7-a3ddd3a40119/mysql-bootstrap/0.log"
Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.350218 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8e707833-acd6-49f7-91f7-a3ddd3a40119/mysql-bootstrap/0.log"
Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.412112 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8e707833-acd6-49f7-91f7-a3ddd3a40119/galera/0.log"
path="/var/log/pods/openstack_openstack-cell1-galera-0_8e707833-acd6-49f7-91f7-a3ddd3a40119/galera/0.log" Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.442717 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_20589719-3a87-43d3-bc79-0450142879ab/nova-scheduler-scheduler/0.log" Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.680450 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9da6c764-c7e5-4b0b-9d9f-8a5904f84187/mysql-bootstrap/0.log" Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.890332 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9da6c764-c7e5-4b0b-9d9f-8a5904f84187/mysql-bootstrap/0.log" Dec 06 08:03:03 crc kubenswrapper[4823]: I1206 08:03:03.895191 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9da6c764-c7e5-4b0b-9d9f-8a5904f84187/galera/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.215504 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-94t86_afe6c323-7053-4b9e-af90-27bb99d99ae3/ovn-controller/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.216116 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_bc03fdf8-c76b-4330-b7fb-58142df075c3/openstackclient/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.413200 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zv8r9_89a1992b-4562-4786-8e44-c95f760d1205/openstack-network-exporter/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.622536 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s2c88_5dfb6e3c-4b92-4e55-9c69-679dc2326717/ovsdb-server-init/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.818028 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s2c88_5dfb6e3c-4b92-4e55-9c69-679dc2326717/ovsdb-server/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.826991 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_06a30022-1a67-4812-941e-3118f3767d35/memcached/0.log" Dec 06 08:03:04 crc kubenswrapper[4823]: I1206 08:03:04.837297 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s2c88_5dfb6e3c-4b92-4e55-9c69-679dc2326717/ovsdb-server-init/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.078056 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0f2f0fc-78ca-43b1-bfa7-3f86823f98cb/nova-metadata-metadata/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.105710 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gpxd6_8a1945af-9fc9-4571-bd52-c93277ed8c64/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.135888 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s2c88_5dfb6e3c-4b92-4e55-9c69-679dc2326717/ovs-vswitchd/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.253774 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_37bce594-0e2b-42f2-affd-892bd457c1b2/ovn-northd/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.287417 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_37bce594-0e2b-42f2-affd-892bd457c1b2/openstack-network-exporter/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.327725 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e7b2f78-e48e-40c3-a0e9-d1b78608da3e/openstack-network-exporter/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.382354 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e7b2f78-e48e-40c3-a0e9-d1b78608da3e/ovsdbserver-nb/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.490079 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0db13557-99bd-4223-a8f1-53de273f6ba3/openstack-network-exporter/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.635627 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0db13557-99bd-4223-a8f1-53de273f6ba3/ovsdbserver-sb/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.723084 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-769d66bc44-mzlht_dc00597e-057e-4c1b-83aa-435c9e5184be/placement-api/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.865245 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8896582a-b688-4a50-8d29-ff8d5faefb5c/init-config-reloader/0.log" Dec 06 08:03:05 crc kubenswrapper[4823]: I1206 08:03:05.908651 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-769d66bc44-mzlht_dc00597e-057e-4c1b-83aa-435c9e5184be/placement-log/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.057070 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8896582a-b688-4a50-8d29-ff8d5faefb5c/prometheus/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.066981 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8896582a-b688-4a50-8d29-ff8d5faefb5c/init-config-reloader/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.086617 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8896582a-b688-4a50-8d29-ff8d5faefb5c/config-reloader/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.124899 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8896582a-b688-4a50-8d29-ff8d5faefb5c/thanos-sidecar/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.246685 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_98d25d92-00b6-4897-b3df-0976c9c3a8eb/setup-container/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.407275 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_98d25d92-00b6-4897-b3df-0976c9c3a8eb/setup-container/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.439815 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_b6649430-bcca-4949-82d4-f15ac31f36e1/setup-container/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.461086 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_98d25d92-00b6-4897-b3df-0976c9c3a8eb/rabbitmq/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.641924 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-notifications-server-0_b6649430-bcca-4949-82d4-f15ac31f36e1/setup-container/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.672420 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7/setup-container/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.703215 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_b6649430-bcca-4949-82d4-f15ac31f36e1/rabbitmq/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.848584 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7/setup-container/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.897502 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1f3464a5-fe04-4d3a-a7bb-eb86bc0482c7/rabbitmq/0.log" Dec 06 08:03:06 crc kubenswrapper[4823]: I1206 08:03:06.966463 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-kh9bg_e4591025-d216-4f7e-8054-7f9cfcc90bfd/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.048965 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-r428q_cfaf0ca0-ad9a-4bf3-b013-5c6798c1e488/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.196185 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-575pg_8a037ce0-c728-4523-b34b-9add69b94c18/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.304177 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-skk4j_acd2b596-5f29-44a7-9946-5027a36dd330/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.382519 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-m249v_11dfacbe-1b10-4f76-8cbd-2a272679c18c/ssh-known-hosts-edpm-deployment/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.643825 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bcdffb5bf-b97n9_58b74f3f-7d40-4aae-a70c-95ff51beca50/proxy-server/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.693561 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bcdffb5bf-b97n9_58b74f3f-7d40-4aae-a70c-95ff51beca50/proxy-httpd/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.694330 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-rl26v_a539f115-9bb8-4282-9f99-c198920d4bb9/swift-ring-rebalance/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.896689 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/account-reaper/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.899277 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/account-auditor/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.933855 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/account-replicator/0.log" Dec 06 08:03:07 crc kubenswrapper[4823]: I1206 08:03:07.969380 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/account-server/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.027045 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/container-auditor/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.124067 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/container-server/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.150975 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/container-replicator/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.152332 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/object-auditor/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.170412 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/container-updater/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.221225 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/object-expirer/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.339134 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/object-replicator/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.374653 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/object-server/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.400602 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/rsync/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.407922 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/swift-recon-cron/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.413576 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df5b8da6-1e4a-4d07-a1af-3a4ab2aa2ce5/object-updater/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.638051 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8xwhs_b7b49501-c951-4829-8791-c27d6e01a606/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.790852 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_57365a20-b2a5-4f40-be8c-5f70d739cfd3/test-operator-logs-container/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 08:03:08.857638 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ffbch_eed68a6c-a7de-40de-8617-34b66781ec31/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 08:03:08 crc kubenswrapper[4823]: I1206 
Dec 06 08:03:09 crc kubenswrapper[4823]: I1206 08:03:09.797639 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_e1b8f909-82f7-4db2-872c-52810a5fb3ab/watcher-applier/0.log"
Dec 06 08:03:10 crc kubenswrapper[4823]: I1206 08:03:10.471402 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_a9651ba4-0674-42c6-bd38-cd1d83e8a0d7/watcher-api-log/0.log"
Dec 06 08:03:12 crc kubenswrapper[4823]: I1206 08:03:12.733064 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_4327f7bf-de7b-4f43-adae-01332719c72d/watcher-decision-engine/0.log"
Dec 06 08:03:14 crc kubenswrapper[4823]: I1206 08:03:14.005731 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_a9651ba4-0674-42c6-bd38-cd1d83e8a0d7/watcher-api/0.log"
Dec 06 08:03:15 crc kubenswrapper[4823]: I1206 08:03:15.140703 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"
Dec 06 08:03:15 crc kubenswrapper[4823]: E1206 08:03:15.141313 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 08:03:28 crc kubenswrapper[4823]: I1206 08:03:28.141286 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f"
Dec 06 08:03:28 crc kubenswrapper[4823]: E1206 08:03:28.142079 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"
Dec 06 08:03:35 crc kubenswrapper[4823]: I1206 08:03:35.917733 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/util/0.log"
Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.167139 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/pull/0.log"
Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.175551 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/pull/0.log"
Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.204122 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/util/0.log"
Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.391434 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/util/0.log"
path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/util/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.413078 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/pull/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.419922 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_230cf8dac84f86224db2e0dae570e07c340f5fecfbe623956c9d81d1dc697lw_6933e1c2-852c-4eab-9956-c93bc9027c9d/extract/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.620457 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4xsdc_af7acc94-0229-4055-b0ea-e5646c927e7a/kube-rbac-proxy/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.741264 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-5jsvb_3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf/kube-rbac-proxy/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.752850 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4xsdc_af7acc94-0229-4055-b0ea-e5646c927e7a/manager/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.920354 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-5jsvb_3ade4605-3b89-4c0e-a05c-b0d7d6ee66bf/manager/0.log" Dec 06 08:03:36 crc kubenswrapper[4823]: I1206 08:03:36.968039 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-9r9sg_69d7c5b3-6bb3-4545-bcf3-9613f979646d/kube-rbac-proxy/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.037493 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-9r9sg_69d7c5b3-6bb3-4545-bcf3-9613f979646d/manager/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.237178 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-mwbpk_9bc807b4-b176-4249-9610-b4c92f99fb0b/kube-rbac-proxy/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.323746 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-mwbpk_9bc807b4-b176-4249-9610-b4c92f99fb0b/manager/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.408273 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-4fdh4_25f101a2-6154-43f7-b4ef-2679a4ebacc9/kube-rbac-proxy/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.412256 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-4fdh4_25f101a2-6154-43f7-b4ef-2679a4ebacc9/manager/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.522989 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-nggsj_22c2c4cb-ba18-4f49-9986-9095779c93dc/kube-rbac-proxy/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.663054 
4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-nggsj_22c2c4cb-ba18-4f49-9986-9095779c93dc/manager/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.755463 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7lkmh_4001a5be-6496-49c2-971c-50723e76c864/kube-rbac-proxy/0.log" Dec 06 08:03:37 crc kubenswrapper[4823]: I1206 08:03:37.951839 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-d4pqr_e98ba71e-3a94-4c9e-b82a-e18dcb197cf9/kube-rbac-proxy/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.019615 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-d4pqr_e98ba71e-3a94-4c9e-b82a-e18dcb197cf9/manager/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.025420 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7lkmh_4001a5be-6496-49c2-971c-50723e76c864/manager/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.183304 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-z7czp_cb125116-0c3b-4831-a05c-9076f5360e28/kube-rbac-proxy/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.341315 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-z7czp_cb125116-0c3b-4831-a05c-9076f5360e28/manager/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.407123 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-n56m7_147b67a9-b422-48ba-b948-a1b42946ef1d/kube-rbac-proxy/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.424999 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-n56m7_147b67a9-b422-48ba-b948-a1b42946ef1d/manager/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.535062 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-m9pvc_03d20c66-aa09-43f5-848a-b352868fb3de/kube-rbac-proxy/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.593202 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-m9pvc_03d20c66-aa09-43f5-848a-b352868fb3de/manager/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.710485 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ncd4b_25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4/kube-rbac-proxy/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.892915 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ncd4b_25eb7fcd-3634-4e2d-b2b3-2f15f9b0bfb4/manager/0.log" Dec 06 08:03:38 crc kubenswrapper[4823]: I1206 08:03:38.921430 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jdgsz_a72ff6fc-2086-4e96-9bc7-7298a0304e5e/kube-rbac-proxy/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: 
I1206 08:03:39.099003 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jdgsz_a72ff6fc-2086-4e96-9bc7-7298a0304e5e/manager/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.119949 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-b45jg_2c435a39-34e9-4d43-bff4-4f5d5a7f1275/kube-rbac-proxy/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.131614 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-b45jg_2c435a39-34e9-4d43-bff4-4f5d5a7f1275/manager/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.297593 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb_0055dc6b-eac6-40aa-adad-1a5202efabb7/kube-rbac-proxy/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.333232 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4bbkpb_0055dc6b-eac6-40aa-adad-1a5202efabb7/manager/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.750561 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-kmmpn_dfbacef0-81cd-45dd-870f-ca9b9a506529/registry-server/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.944390 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-ggt2m_424f7266-0185-4f27-9de3-1daf6a06dd2c/kube-rbac-proxy/0.log" Dec 06 08:03:39 crc kubenswrapper[4823]: I1206 08:03:39.951309 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-f4b959fdf-fzm4b_f3b9d10e-c904-4cef-aad2-1d9428fc198d/operator/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.015444 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-ggt2m_424f7266-0185-4f27-9de3-1daf6a06dd2c/manager/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.140341 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:03:40 crc kubenswrapper[4823]: E1206 08:03:40.140714 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.181810 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9mbh5_d50c6d95-dbef-423c-8094-f8a1634d9b72/manager/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.188071 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9mbh5_d50c6d95-dbef-423c-8094-f8a1634d9b72/kube-rbac-proxy/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.405007 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-z92rh_86d88b9b-a5a9-47e0-bfdb-381ef80693f3/operator/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.405901 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-j45x4_b7fb4033-737a-4492-a5fd-422532e0c693/kube-rbac-proxy/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.617494 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-j45x4_b7fb4033-737a-4492-a5fd-422532e0c693/manager/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.626672 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-nb62h_18059fdc-d882-485f-9de3-0567bac485ba/kube-rbac-proxy/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.932104 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vmvpr_433b05ca-a4e2-4e7f-96d2-53e6efb9efc7/manager/0.log" Dec 06 08:03:40 crc kubenswrapper[4823]: I1206 08:03:40.943993 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vmvpr_433b05ca-a4e2-4e7f-96d2-53e6efb9efc7/kube-rbac-proxy/0.log" Dec 06 08:03:41 crc kubenswrapper[4823]: I1206 08:03:41.065564 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-nb62h_18059fdc-d882-485f-9de3-0567bac485ba/manager/0.log" Dec 06 08:03:41 crc kubenswrapper[4823]: I1206 08:03:41.223185 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd68fb56b-hkzrc_d98bfe02-e1d8-4bdf-a2e2-cf9a83964511/kube-rbac-proxy/0.log" Dec 06 08:03:41 crc kubenswrapper[4823]: I1206 08:03:41.309012 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd68fb56b-hkzrc_d98bfe02-e1d8-4bdf-a2e2-cf9a83964511/manager/0.log" Dec 06 08:03:41 crc kubenswrapper[4823]: I1206 08:03:41.395575 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75cbb7bbf4-bcdjh_78374f83-e964-486e-9590-b6bb562a5185/manager/0.log" Dec 06 08:03:52 crc kubenswrapper[4823]: I1206 08:03:52.141581 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:03:52 crc kubenswrapper[4823]: E1206 08:03:52.142476 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:04:02 crc kubenswrapper[4823]: I1206 08:04:02.665979 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xnpns_b8b36d44-6eab-4f81-bd7c-a0887b7ba1bc/control-plane-machine-set-operator/0.log" Dec 06 08:04:02 crc kubenswrapper[4823]: I1206 08:04:02.843398 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5rbww_913bebf0-c7cd-40f4-b429-fe18368c8076/kube-rbac-proxy/0.log" Dec 06 08:04:02 crc kubenswrapper[4823]: I1206 08:04:02.843918 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5rbww_913bebf0-c7cd-40f4-b429-fe18368c8076/machine-api-operator/0.log" Dec 06 08:04:03 crc kubenswrapper[4823]: I1206 08:04:03.141521 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:04:03 crc kubenswrapper[4823]: E1206 08:04:03.141834 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:04:16 crc kubenswrapper[4823]: I1206 08:04:16.207414 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-gzrvf_16f9d7d8-2452-47c9-ad9a-468a067e74bc/cert-manager-controller/0.log" Dec 06 08:04:16 crc kubenswrapper[4823]: I1206 08:04:16.387984 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-sg6d6_8633755c-f571-4f49-bb10-a2ce86967ce6/cert-manager-cainjector/0.log" Dec 06 08:04:16 crc kubenswrapper[4823]: I1206 08:04:16.440676 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l8zkg_7e6a87fb-3fc3-426b-b8fc-3bec076c5544/cert-manager-webhook/0.log" Dec 06 08:04:17 crc kubenswrapper[4823]: I1206 08:04:17.140684 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:04:17 crc kubenswrapper[4823]: E1206 08:04:17.141158 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:04:29 crc kubenswrapper[4823]: I1206 08:04:29.991169 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-hnxhv_e25ab073-dc82-4437-b4b4-e74f7a063f35/nmstate-console-plugin/0.log" Dec 06 08:04:30 crc kubenswrapper[4823]: I1206 08:04:30.176935 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b6h6k_bf97aa79-e20e-4304-84e5-abd78e1de48a/nmstate-handler/0.log" Dec 06 08:04:30 crc kubenswrapper[4823]: I1206 08:04:30.294631 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-5wb79_efd022a3-2590-4bb1-93ff-b194f6451b5f/kube-rbac-proxy/0.log" Dec 06 08:04:30 crc kubenswrapper[4823]: I1206 08:04:30.370484 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-5wb79_efd022a3-2590-4bb1-93ff-b194f6451b5f/nmstate-metrics/0.log" Dec 06 08:04:30 crc kubenswrapper[4823]: I1206 08:04:30.424083 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-z7fqk_e25db4e0-af48-4315-b9f0-ee0a2d774e46/nmstate-operator/0.log" Dec 06 08:04:30 crc kubenswrapper[4823]: I1206 08:04:30.563081 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-vjn99_c2ecc92a-539d-4a78-93d0-ca682b8d76a3/nmstate-webhook/0.log" Dec 06 08:04:32 crc kubenswrapper[4823]: I1206 08:04:32.141199 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:04:32 crc kubenswrapper[4823]: E1206 08:04:32.141795 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:04:45 crc kubenswrapper[4823]: I1206 08:04:45.747014 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jg4v8_3c12f95f-8514-4b08-8177-d95f8b0bc24d/kube-rbac-proxy/0.log" Dec 06 08:04:45 crc kubenswrapper[4823]: I1206 08:04:45.749336 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jg4v8_3c12f95f-8514-4b08-8177-d95f8b0bc24d/controller/0.log" Dec 06 08:04:45 crc kubenswrapper[4823]: I1206 08:04:45.927918 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-frr-files/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.141888 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:04:46 crc kubenswrapper[4823]: E1206 08:04:46.142248 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.149134 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-frr-files/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.158303 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-reloader/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.184100 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-reloader/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.199029 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-metrics/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.381538 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-frr-files/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.469184 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-reloader/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.485066 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-metrics/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.510451 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-metrics/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.623948 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-frr-files/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.661771 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-reloader/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.679717 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/controller/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.728693 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/cp-metrics/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.856827 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/frr-metrics/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.875908 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/kube-rbac-proxy/0.log" Dec 06 08:04:46 crc kubenswrapper[4823]: I1206 08:04:46.962512 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/kube-rbac-proxy-frr/0.log" Dec 06 08:04:47 crc kubenswrapper[4823]: I1206 08:04:47.130281 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/reloader/0.log" Dec 06 08:04:47 crc kubenswrapper[4823]: I1206 08:04:47.261240 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-lsjzk_94cf4797-42d3-4c53-9d68-93210ba23378/frr-k8s-webhook-server/0.log" Dec 06 08:04:47 crc kubenswrapper[4823]: I1206 08:04:47.446520 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-648d7bc7c7-lfcwj_72976e23-4d5d-42d6-9667-ccf6e45411a4/manager/0.log" Dec 06 08:04:47 crc kubenswrapper[4823]: I1206 08:04:47.605742 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58db4d7bbd-nw4xn_80b20d2f-135b-4bd8-8236-19429964c077/webhook-server/0.log" Dec 06 08:04:47 crc kubenswrapper[4823]: I1206 08:04:47.778085 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r9ml4_a94e23f2-d423-4414-9eca-532b936de8ae/kube-rbac-proxy/0.log" Dec 06 08:04:48 crc kubenswrapper[4823]: I1206 08:04:48.556082 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r9ml4_a94e23f2-d423-4414-9eca-532b936de8ae/speaker/0.log" Dec 06 08:04:49 crc kubenswrapper[4823]: I1206 08:04:49.149147 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pw4mq_eb5ef3cd-9337-4665-945a-403b2619c53d/frr/0.log" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.514295 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nm9n6"] Dec 06 08:04:50 crc kubenswrapper[4823]: E1206 08:04:50.515171 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595d588c-b71d-440b-8e0f-818da7bf0df9" containerName="container-00" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.515186 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="595d588c-b71d-440b-8e0f-818da7bf0df9" containerName="container-00" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.515471 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="595d588c-b71d-440b-8e0f-818da7bf0df9" containerName="container-00" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.517464 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.530522 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nm9n6"] Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.585862 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-catalog-content\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.585946 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxb4s\" (UniqueName: \"kubernetes.io/projected/4a2130ec-a436-40b9-bdd1-7548668b8f5d-kube-api-access-rxb4s\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.585980 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-utilities\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.688379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-catalog-content\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.688452 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxb4s\" (UniqueName: \"kubernetes.io/projected/4a2130ec-a436-40b9-bdd1-7548668b8f5d-kube-api-access-rxb4s\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.688479 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-utilities\") pod \"redhat-operators-nm9n6\" (UID: 
\"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.689035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-catalog-content\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.689113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-utilities\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.712051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxb4s\" (UniqueName: \"kubernetes.io/projected/4a2130ec-a436-40b9-bdd1-7548668b8f5d-kube-api-access-rxb4s\") pod \"redhat-operators-nm9n6\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:50 crc kubenswrapper[4823]: I1206 08:04:50.849135 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:04:51 crc kubenswrapper[4823]: I1206 08:04:51.498892 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nm9n6"] Dec 06 08:04:51 crc kubenswrapper[4823]: I1206 08:04:51.840791 4823 generic.go:334] "Generic (PLEG): container finished" podID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerID="40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6" exitCode=0 Dec 06 08:04:51 crc kubenswrapper[4823]: I1206 08:04:51.840902 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerDied","Data":"40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6"} Dec 06 08:04:51 crc kubenswrapper[4823]: I1206 08:04:51.841155 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerStarted","Data":"4e907ad35e4f8f4d907a569724b209f427c990f5f0c82393f9dcb81f4e79ca81"} Dec 06 08:04:53 crc kubenswrapper[4823]: I1206 08:04:53.864323 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerStarted","Data":"0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28"} Dec 06 08:04:56 crc kubenswrapper[4823]: I1206 08:04:56.894917 4823 generic.go:334] "Generic (PLEG): container finished" podID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerID="0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28" exitCode=0 Dec 06 08:04:56 crc kubenswrapper[4823]: I1206 08:04:56.894999 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerDied","Data":"0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28"} Dec 06 08:05:00 crc kubenswrapper[4823]: I1206 08:05:00.141469 4823 scope.go:117] "RemoveContainer" 
containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:05:00 crc kubenswrapper[4823]: E1206 08:05:00.142397 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:05:01 crc kubenswrapper[4823]: I1206 08:05:01.954836 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerStarted","Data":"2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1"} Dec 06 08:05:01 crc kubenswrapper[4823]: I1206 08:05:01.984945 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nm9n6" podStartSLOduration=2.973359469 podStartE2EDuration="11.984905444s" podCreationTimestamp="2025-12-06 08:04:50 +0000 UTC" firstStartedPulling="2025-12-06 08:04:51.844811673 +0000 UTC m=+5993.130563633" lastFinishedPulling="2025-12-06 08:05:00.856357648 +0000 UTC m=+6002.142109608" observedRunningTime="2025-12-06 08:05:01.980450136 +0000 UTC m=+6003.266202096" watchObservedRunningTime="2025-12-06 08:05:01.984905444 +0000 UTC m=+6003.270657404" Dec 06 08:05:02 crc kubenswrapper[4823]: I1206 08:05:02.415364 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/util/0.log" Dec 06 08:05:02 crc kubenswrapper[4823]: I1206 08:05:02.692101 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/util/0.log" Dec 06 08:05:02 crc kubenswrapper[4823]: I1206 08:05:02.885860 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/pull/0.log" Dec 06 08:05:02 crc kubenswrapper[4823]: I1206 08:05:02.886017 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/pull/0.log" Dec 06 08:05:02 crc kubenswrapper[4823]: I1206 08:05:02.936473 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/pull/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.062879 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/util/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.147644 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8r5b6_bf6c7416-ec1a-4c0d-97ac-6f1a1c618788/extract/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.224527 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/util/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.364470 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/util/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.403429 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/pull/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.403744 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/pull/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.587051 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/extract/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.610253 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/util/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.654874 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921026v5v_3a07d62c-425a-451a-a937-aadc80058570/pull/0.log" Dec 06 08:05:03 crc kubenswrapper[4823]: I1206 08:05:03.916591 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/util/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.067620 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/util/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.079247 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/pull/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.088064 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/pull/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.326757 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/util/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.363040 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/pull/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.363345 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83mglft_5f6e91ee-2aa4-4411-9eaa-eaa5f85c2901/extract/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.525638 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/extract-utilities/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.730988 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/extract-utilities/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.736285 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/extract-content/0.log" Dec 06 08:05:04 crc kubenswrapper[4823]: I1206 08:05:04.759364 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/extract-content/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.010313 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/extract-utilities/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.021319 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/extract-content/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.313975 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/extract-utilities/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.522595 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/extract-utilities/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.606377 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/extract-content/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.612748 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/extract-content/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.853950 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/extract-utilities/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.859282 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kzxl6_584a4234-6095-4bab-9af7-3ae474ac27e6/registry-server/0.log" Dec 06 08:05:05 crc kubenswrapper[4823]: I1206 08:05:05.906857 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/extract-content/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.092078 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-969f9_c529f398-1c3e-4a7c-a46f-d57d2f588b9c/marketplace-operator/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.200004 4823 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5tnvt_8aa2000e-0a46-4d7c-b13a-4ae913db9b28/registry-server/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.348222 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/extract-utilities/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.562305 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/extract-utilities/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.615703 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/extract-content/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.615713 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/extract-content/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.802162 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/extract-content/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.813385 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/extract-utilities/0.log" Dec 06 08:05:06 crc kubenswrapper[4823]: I1206 08:05:06.851844 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/extract-utilities/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.056858 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jfh2s_3e447c74-d5a1-433a-bbb8-faa526e58597/registry-server/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.099169 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/extract-utilities/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.178915 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/extract-content/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.192713 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/extract-content/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.374460 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/extract-utilities/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.400794 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/registry-server/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.413346 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nm9n6_4a2130ec-a436-40b9-bdd1-7548668b8f5d/extract-content/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.435456 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/extract-utilities/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.590822 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/extract-content/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.616745 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/extract-utilities/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.633252 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/extract-content/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.813268 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/extract-utilities/0.log" Dec 06 08:05:07 crc kubenswrapper[4823]: I1206 08:05:07.827479 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/extract-content/0.log" Dec 06 08:05:08 crc kubenswrapper[4823]: I1206 08:05:08.978183 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v426c_f25daf46-c19d-4f30-b638-f1d1ffb22e99/registry-server/0.log" Dec 06 08:05:10 crc kubenswrapper[4823]: I1206 08:05:10.849648 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:05:10 crc kubenswrapper[4823]: I1206 08:05:10.850078 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:05:10 crc kubenswrapper[4823]: I1206 08:05:10.901454 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:05:11 crc kubenswrapper[4823]: I1206 08:05:11.090400 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:05:11 crc kubenswrapper[4823]: I1206 08:05:11.196605 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nm9n6"] Dec 06 08:05:12 crc kubenswrapper[4823]: I1206 08:05:12.149305 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:05:12 crc kubenswrapper[4823]: E1206 08:05:12.149927 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.094428 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nm9n6" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="registry-server" containerID="cri-o://2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1" gracePeriod=2 Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.617254 4823 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.653479 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-catalog-content\") pod \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.653754 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-utilities\") pod \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.653894 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxb4s\" (UniqueName: \"kubernetes.io/projected/4a2130ec-a436-40b9-bdd1-7548668b8f5d-kube-api-access-rxb4s\") pod \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\" (UID: \"4a2130ec-a436-40b9-bdd1-7548668b8f5d\") " Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.654705 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-utilities" (OuterVolumeSpecName: "utilities") pod "4a2130ec-a436-40b9-bdd1-7548668b8f5d" (UID: "4a2130ec-a436-40b9-bdd1-7548668b8f5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.668565 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2130ec-a436-40b9-bdd1-7548668b8f5d-kube-api-access-rxb4s" (OuterVolumeSpecName: "kube-api-access-rxb4s") pod "4a2130ec-a436-40b9-bdd1-7548668b8f5d" (UID: "4a2130ec-a436-40b9-bdd1-7548668b8f5d"). InnerVolumeSpecName "kube-api-access-rxb4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.756633 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.756699 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxb4s\" (UniqueName: \"kubernetes.io/projected/4a2130ec-a436-40b9-bdd1-7548668b8f5d-kube-api-access-rxb4s\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.783545 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a2130ec-a436-40b9-bdd1-7548668b8f5d" (UID: "4a2130ec-a436-40b9-bdd1-7548668b8f5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:05:13 crc kubenswrapper[4823]: I1206 08:05:13.858493 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a2130ec-a436-40b9-bdd1-7548668b8f5d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.106845 4823 generic.go:334] "Generic (PLEG): container finished" podID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerID="2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1" exitCode=0 Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.106922 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerDied","Data":"2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1"} Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.107002 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9n6" event={"ID":"4a2130ec-a436-40b9-bdd1-7548668b8f5d","Type":"ContainerDied","Data":"4e907ad35e4f8f4d907a569724b209f427c990f5f0c82393f9dcb81f4e79ca81"} Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.107032 4823 scope.go:117] "RemoveContainer" containerID="2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.107391 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9n6" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.127197 4823 scope.go:117] "RemoveContainer" containerID="0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.158519 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nm9n6"] Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.168616 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nm9n6"] Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.170693 4823 scope.go:117] "RemoveContainer" containerID="40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.215802 4823 scope.go:117] "RemoveContainer" containerID="2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1" Dec 06 08:05:14 crc kubenswrapper[4823]: E1206 08:05:14.216526 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1\": container with ID starting with 2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1 not found: ID does not exist" containerID="2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.216581 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1"} err="failed to get container status \"2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1\": rpc error: code = NotFound desc = could not find container \"2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1\": container with ID starting with 2923f00cce66eeda57d325d16307d4c685e242a51e4ede8007266e20d24501b1 not found: ID does not exist" Dec 06 08:05:14 crc 
kubenswrapper[4823]: I1206 08:05:14.216608 4823 scope.go:117] "RemoveContainer" containerID="0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28" Dec 06 08:05:14 crc kubenswrapper[4823]: E1206 08:05:14.217160 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28\": container with ID starting with 0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28 not found: ID does not exist" containerID="0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.217295 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28"} err="failed to get container status \"0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28\": rpc error: code = NotFound desc = could not find container \"0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28\": container with ID starting with 0157665f3cf16dde68fb32aa5c2678fc8ae7ee24843678c05f8fbb3cabcc9d28 not found: ID does not exist" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.217391 4823 scope.go:117] "RemoveContainer" containerID="40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6" Dec 06 08:05:14 crc kubenswrapper[4823]: E1206 08:05:14.217842 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6\": container with ID starting with 40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6 not found: ID does not exist" containerID="40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6" Dec 06 08:05:14 crc kubenswrapper[4823]: I1206 08:05:14.217882 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6"} err="failed to get container status \"40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6\": rpc error: code = NotFound desc = could not find container \"40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6\": container with ID starting with 40e41d69a08a001835a3c3282a9c4ef6bdcea9f6184cb1f940dd722a22cf04a6 not found: ID does not exist" Dec 06 08:05:15 crc kubenswrapper[4823]: I1206 08:05:15.153938 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" path="/var/lib/kubelet/pods/4a2130ec-a436-40b9-bdd1-7548668b8f5d/volumes" Dec 06 08:05:20 crc kubenswrapper[4823]: I1206 08:05:20.000792 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-74wlw_f8c8c4c4-eace-4fdb-bad2-2f0cf082c61c/prometheus-operator/0.log" Dec 06 08:05:20 crc kubenswrapper[4823]: I1206 08:05:20.109744 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f98d949bf-gg2gq_f907eb32-7551-4d4a-b365-cbaa043890b1/prometheus-operator-admission-webhook/0.log" Dec 06 08:05:20 crc kubenswrapper[4823]: I1206 08:05:20.176881 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f98d949bf-lksl9_fe5f932d-2587-431d-87ff-0c02b2270c11/prometheus-operator-admission-webhook/0.log" Dec 06 08:05:20 crc 
kubenswrapper[4823]: I1206 08:05:20.341757 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-6r9cv_6bb10a2a-8118-4c1f-bc30-d680071b8992/operator/0.log" Dec 06 08:05:20 crc kubenswrapper[4823]: I1206 08:05:20.425070 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-q2gc2_66fe6642-822b-4700-b97c-48ef71676514/perses-operator/0.log" Dec 06 08:05:27 crc kubenswrapper[4823]: I1206 08:05:27.145612 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:05:27 crc kubenswrapper[4823]: E1206 08:05:27.146573 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:05:39 crc kubenswrapper[4823]: I1206 08:05:39.149815 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:05:39 crc kubenswrapper[4823]: I1206 08:05:39.400223 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"7c5c3c9d350e5f051586d17e14d2e898ac1f9c5170d0c00982511584718ac52e"} Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.518130 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-476gx"] Dec 06 08:07:20 crc kubenswrapper[4823]: E1206 08:07:20.519306 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="registry-server" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.519325 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="registry-server" Dec 06 08:07:20 crc kubenswrapper[4823]: E1206 08:07:20.519419 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="extract-content" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.519428 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="extract-content" Dec 06 08:07:20 crc kubenswrapper[4823]: E1206 08:07:20.519446 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="extract-utilities" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.519456 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="extract-utilities" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.519706 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a2130ec-a436-40b9-bdd1-7548668b8f5d" containerName="registry-server" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.521525 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.527639 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-476gx"] Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.633093 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xzs\" (UniqueName: \"kubernetes.io/projected/3d8ff444-56b3-410b-adf2-d69302dcdb96-kube-api-access-r9xzs\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.633166 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-catalog-content\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.633622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-utilities\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.735144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xzs\" (UniqueName: \"kubernetes.io/projected/3d8ff444-56b3-410b-adf2-d69302dcdb96-kube-api-access-r9xzs\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.735219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-catalog-content\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.735310 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-utilities\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.735799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-utilities\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.735931 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-catalog-content\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.761779 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r9xzs\" (UniqueName: \"kubernetes.io/projected/3d8ff444-56b3-410b-adf2-d69302dcdb96-kube-api-access-r9xzs\") pod \"redhat-marketplace-476gx\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:20 crc kubenswrapper[4823]: I1206 08:07:20.849974 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:21 crc kubenswrapper[4823]: I1206 08:07:21.438338 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-476gx"] Dec 06 08:07:22 crc kubenswrapper[4823]: I1206 08:07:22.457295 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerID="05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296" exitCode=0 Dec 06 08:07:22 crc kubenswrapper[4823]: I1206 08:07:22.457373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-476gx" event={"ID":"3d8ff444-56b3-410b-adf2-d69302dcdb96","Type":"ContainerDied","Data":"05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296"} Dec 06 08:07:22 crc kubenswrapper[4823]: I1206 08:07:22.457831 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-476gx" event={"ID":"3d8ff444-56b3-410b-adf2-d69302dcdb96","Type":"ContainerStarted","Data":"c24692797557bfd821f01a3ae02ac7e63b974a909881955b1f756f3c14e32430"} Dec 06 08:07:22 crc kubenswrapper[4823]: I1206 08:07:22.459530 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.299583 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xd9rp"] Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.302005 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.317822 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd9rp"] Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.387150 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-utilities\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.387313 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/5fdee1f4-8a2b-49be-bf86-47a92260a830-kube-api-access-n4gz2\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.387394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-catalog-content\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.489289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-utilities\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.489681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/5fdee1f4-8a2b-49be-bf86-47a92260a830-kube-api-access-n4gz2\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.489770 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-catalog-content\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.489876 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-utilities\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.490229 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-catalog-content\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.508503 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/5fdee1f4-8a2b-49be-bf86-47a92260a830-kube-api-access-n4gz2\") pod \"certified-operators-xd9rp\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:23 crc kubenswrapper[4823]: I1206 08:07:23.623514 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:24 crc kubenswrapper[4823]: E1206 08:07:24.209849 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8ff444_56b3_410b_adf2_d69302dcdb96.slice/crio-conmon-1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f.scope\": RecentStats: unable to find data in memory cache]" Dec 06 08:07:24 crc kubenswrapper[4823]: I1206 08:07:24.253713 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd9rp"] Dec 06 08:07:24 crc kubenswrapper[4823]: I1206 08:07:24.479517 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerID="1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f" exitCode=0 Dec 06 08:07:24 crc kubenswrapper[4823]: I1206 08:07:24.479605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-476gx" event={"ID":"3d8ff444-56b3-410b-adf2-d69302dcdb96","Type":"ContainerDied","Data":"1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f"} Dec 06 08:07:24 crc kubenswrapper[4823]: I1206 08:07:24.482710 4823 generic.go:334] "Generic (PLEG): container finished" podID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerID="f5aecb8efa18648fe7c6e811e55df8ffec213a7f0b1428682d430f6945faaaed" exitCode=0 Dec 06 08:07:24 crc kubenswrapper[4823]: I1206 08:07:24.482751 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rp" event={"ID":"5fdee1f4-8a2b-49be-bf86-47a92260a830","Type":"ContainerDied","Data":"f5aecb8efa18648fe7c6e811e55df8ffec213a7f0b1428682d430f6945faaaed"} Dec 06 08:07:24 crc kubenswrapper[4823]: I1206 08:07:24.482774 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rp" event={"ID":"5fdee1f4-8a2b-49be-bf86-47a92260a830","Type":"ContainerStarted","Data":"b570ddf59490ba50c6e4bead2ec1348980c708f2521a971a2dff98e425644371"} Dec 06 08:07:25 crc kubenswrapper[4823]: I1206 08:07:25.494554 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-476gx" event={"ID":"3d8ff444-56b3-410b-adf2-d69302dcdb96","Type":"ContainerStarted","Data":"74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087"} Dec 06 08:07:25 crc kubenswrapper[4823]: I1206 08:07:25.496629 4823 generic.go:334] "Generic (PLEG): container finished" podID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerID="4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f" exitCode=0 Dec 06 08:07:25 crc kubenswrapper[4823]: I1206 08:07:25.496716 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" event={"ID":"68e86d3f-c3d7-451b-9c68-d318eb241e87","Type":"ContainerDied","Data":"4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f"} Dec 06 08:07:25 crc kubenswrapper[4823]: I1206 08:07:25.497040 4823 scope.go:117] 
"RemoveContainer" containerID="4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f" Dec 06 08:07:25 crc kubenswrapper[4823]: I1206 08:07:25.533729 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-476gx" podStartSLOduration=2.813389818 podStartE2EDuration="5.533704798s" podCreationTimestamp="2025-12-06 08:07:20 +0000 UTC" firstStartedPulling="2025-12-06 08:07:22.459340724 +0000 UTC m=+6143.745092684" lastFinishedPulling="2025-12-06 08:07:25.179655704 +0000 UTC m=+6146.465407664" observedRunningTime="2025-12-06 08:07:25.513513717 +0000 UTC m=+6146.799265677" watchObservedRunningTime="2025-12-06 08:07:25.533704798 +0000 UTC m=+6146.819456768" Dec 06 08:07:26 crc kubenswrapper[4823]: I1206 08:07:26.221272 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jmqgf_must-gather-zwlf6_68e86d3f-c3d7-451b-9c68-d318eb241e87/gather/0.log" Dec 06 08:07:26 crc kubenswrapper[4823]: I1206 08:07:26.514451 4823 generic.go:334] "Generic (PLEG): container finished" podID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerID="d4a1eca6c32ece6b122a78ff78ea932eecf53ad613057cf92e760392011bd4ba" exitCode=0 Dec 06 08:07:26 crc kubenswrapper[4823]: I1206 08:07:26.514914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rp" event={"ID":"5fdee1f4-8a2b-49be-bf86-47a92260a830","Type":"ContainerDied","Data":"d4a1eca6c32ece6b122a78ff78ea932eecf53ad613057cf92e760392011bd4ba"} Dec 06 08:07:27 crc kubenswrapper[4823]: I1206 08:07:27.528509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rp" event={"ID":"5fdee1f4-8a2b-49be-bf86-47a92260a830","Type":"ContainerStarted","Data":"408be40e2d8c6de119e33229d0bdc79c8d9df0107dc6ddce419982890acc4d4e"} Dec 06 08:07:27 crc kubenswrapper[4823]: I1206 08:07:27.558832 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xd9rp" podStartSLOduration=2.118889404 podStartE2EDuration="4.55881014s" podCreationTimestamp="2025-12-06 08:07:23 +0000 UTC" firstStartedPulling="2025-12-06 08:07:24.483845478 +0000 UTC m=+6145.769597438" lastFinishedPulling="2025-12-06 08:07:26.923766214 +0000 UTC m=+6148.209518174" observedRunningTime="2025-12-06 08:07:27.550779349 +0000 UTC m=+6148.836531339" watchObservedRunningTime="2025-12-06 08:07:27.55881014 +0000 UTC m=+6148.844562110" Dec 06 08:07:30 crc kubenswrapper[4823]: I1206 08:07:30.851428 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:30 crc kubenswrapper[4823]: I1206 08:07:30.852938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:30 crc kubenswrapper[4823]: I1206 08:07:30.907521 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:31 crc kubenswrapper[4823]: I1206 08:07:31.608031 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:32 crc kubenswrapper[4823]: I1206 08:07:32.092909 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-476gx"] Dec 06 08:07:33 crc kubenswrapper[4823]: I1206 08:07:33.577424 4823 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-marketplace-476gx" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="registry-server" containerID="cri-o://74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087" gracePeriod=2 Dec 06 08:07:33 crc kubenswrapper[4823]: I1206 08:07:33.625160 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:33 crc kubenswrapper[4823]: I1206 08:07:33.625207 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:33 crc kubenswrapper[4823]: I1206 08:07:33.680939 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.080805 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.180020 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-utilities\") pod \"3d8ff444-56b3-410b-adf2-d69302dcdb96\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.180288 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9xzs\" (UniqueName: \"kubernetes.io/projected/3d8ff444-56b3-410b-adf2-d69302dcdb96-kube-api-access-r9xzs\") pod \"3d8ff444-56b3-410b-adf2-d69302dcdb96\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.180361 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-catalog-content\") pod \"3d8ff444-56b3-410b-adf2-d69302dcdb96\" (UID: \"3d8ff444-56b3-410b-adf2-d69302dcdb96\") " Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.182559 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-utilities" (OuterVolumeSpecName: "utilities") pod "3d8ff444-56b3-410b-adf2-d69302dcdb96" (UID: "3d8ff444-56b3-410b-adf2-d69302dcdb96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.191070 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8ff444-56b3-410b-adf2-d69302dcdb96-kube-api-access-r9xzs" (OuterVolumeSpecName: "kube-api-access-r9xzs") pod "3d8ff444-56b3-410b-adf2-d69302dcdb96" (UID: "3d8ff444-56b3-410b-adf2-d69302dcdb96"). InnerVolumeSpecName "kube-api-access-r9xzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.206032 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d8ff444-56b3-410b-adf2-d69302dcdb96" (UID: "3d8ff444-56b3-410b-adf2-d69302dcdb96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.282559 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9xzs\" (UniqueName: \"kubernetes.io/projected/3d8ff444-56b3-410b-adf2-d69302dcdb96-kube-api-access-r9xzs\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.283151 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.283217 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8ff444-56b3-410b-adf2-d69302dcdb96-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.589287 4823 generic.go:334] "Generic (PLEG): container finished" podID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerID="74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087" exitCode=0 Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.589367 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-476gx" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.589382 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-476gx" event={"ID":"3d8ff444-56b3-410b-adf2-d69302dcdb96","Type":"ContainerDied","Data":"74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087"} Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.589428 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-476gx" event={"ID":"3d8ff444-56b3-410b-adf2-d69302dcdb96","Type":"ContainerDied","Data":"c24692797557bfd821f01a3ae02ac7e63b974a909881955b1f756f3c14e32430"} Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.589448 4823 scope.go:117] "RemoveContainer" containerID="74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.624426 4823 scope.go:117] "RemoveContainer" containerID="1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.625127 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-476gx"] Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.646526 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-476gx"] Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.648256 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.656641 4823 scope.go:117] "RemoveContainer" containerID="05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.714411 4823 scope.go:117] "RemoveContainer" containerID="74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087" Dec 06 08:07:34 crc kubenswrapper[4823]: E1206 08:07:34.715151 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087\": container with ID starting with 
74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087 not found: ID does not exist" containerID="74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.715208 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087"} err="failed to get container status \"74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087\": rpc error: code = NotFound desc = could not find container \"74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087\": container with ID starting with 74cc7104d22b3f194a0db15ec9c24f71b15db3406ad114012c7c91c7a267e087 not found: ID does not exist" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.715238 4823 scope.go:117] "RemoveContainer" containerID="1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f" Dec 06 08:07:34 crc kubenswrapper[4823]: E1206 08:07:34.715794 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f\": container with ID starting with 1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f not found: ID does not exist" containerID="1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.715839 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f"} err="failed to get container status \"1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f\": rpc error: code = NotFound desc = could not find container \"1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f\": container with ID starting with 1b3827de0dcac15c4f42667a837ca218b41460dcf6fda67551e0b24d09aef37f not found: ID does not exist" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.715897 4823 scope.go:117] "RemoveContainer" containerID="05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296" Dec 06 08:07:34 crc kubenswrapper[4823]: E1206 08:07:34.716327 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296\": container with ID starting with 05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296 not found: ID does not exist" containerID="05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296" Dec 06 08:07:34 crc kubenswrapper[4823]: I1206 08:07:34.716354 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296"} err="failed to get container status \"05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296\": rpc error: code = NotFound desc = could not find container \"05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296\": container with ID starting with 05e05d450a8d195dd064dd722b3a48c5651ed2e8e54c7c7ef4f2bdd03a516296 not found: ID does not exist" Dec 06 08:07:35 crc kubenswrapper[4823]: I1206 08:07:35.152355 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" path="/var/lib/kubelet/pods/3d8ff444-56b3-410b-adf2-d69302dcdb96/volumes" Dec 06 08:07:35 crc kubenswrapper[4823]: I1206 08:07:35.784200 
4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jmqgf/must-gather-zwlf6"] Dec 06 08:07:35 crc kubenswrapper[4823]: I1206 08:07:35.784496 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="copy" containerID="cri-o://8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227" gracePeriod=2 Dec 06 08:07:35 crc kubenswrapper[4823]: I1206 08:07:35.793866 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jmqgf/must-gather-zwlf6"] Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.232413 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jmqgf_must-gather-zwlf6_68e86d3f-c3d7-451b-9c68-d318eb241e87/copy/0.log" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.233093 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.425968 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4rr9\" (UniqueName: \"kubernetes.io/projected/68e86d3f-c3d7-451b-9c68-d318eb241e87-kube-api-access-l4rr9\") pod \"68e86d3f-c3d7-451b-9c68-d318eb241e87\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.426238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/68e86d3f-c3d7-451b-9c68-d318eb241e87-must-gather-output\") pod \"68e86d3f-c3d7-451b-9c68-d318eb241e87\" (UID: \"68e86d3f-c3d7-451b-9c68-d318eb241e87\") " Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.436528 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e86d3f-c3d7-451b-9c68-d318eb241e87-kube-api-access-l4rr9" (OuterVolumeSpecName: "kube-api-access-l4rr9") pod "68e86d3f-c3d7-451b-9c68-d318eb241e87" (UID: "68e86d3f-c3d7-451b-9c68-d318eb241e87"). InnerVolumeSpecName "kube-api-access-l4rr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.532126 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4rr9\" (UniqueName: \"kubernetes.io/projected/68e86d3f-c3d7-451b-9c68-d318eb241e87-kube-api-access-l4rr9\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.613703 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jmqgf_must-gather-zwlf6_68e86d3f-c3d7-451b-9c68-d318eb241e87/copy/0.log" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.614775 4823 generic.go:334] "Generic (PLEG): container finished" podID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerID="8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227" exitCode=143 Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.614851 4823 scope.go:117] "RemoveContainer" containerID="8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.614900 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jmqgf/must-gather-zwlf6" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.647844 4823 scope.go:117] "RemoveContainer" containerID="4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.654847 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e86d3f-c3d7-451b-9c68-d318eb241e87-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "68e86d3f-c3d7-451b-9c68-d318eb241e87" (UID: "68e86d3f-c3d7-451b-9c68-d318eb241e87"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.738824 4823 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/68e86d3f-c3d7-451b-9c68-d318eb241e87-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.738969 4823 scope.go:117] "RemoveContainer" containerID="8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227" Dec 06 08:07:36 crc kubenswrapper[4823]: E1206 08:07:36.739700 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227\": container with ID starting with 8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227 not found: ID does not exist" containerID="8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.739743 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227"} err="failed to get container status \"8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227\": rpc error: code = NotFound desc = could not find container \"8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227\": container with ID starting with 8cac78f41bb58c87b1d1508bbd80443931505dedf6130d7150af3815d94e1227 not found: ID does not exist" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.739765 4823 scope.go:117] "RemoveContainer" containerID="4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f" Dec 06 08:07:36 crc kubenswrapper[4823]: E1206 08:07:36.739983 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f\": container with ID starting with 4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f not found: ID does not exist" containerID="4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.740020 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f"} err="failed to get container status \"4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f\": rpc error: code = NotFound desc = could not find container \"4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f\": container with ID starting with 4653bd470364978983bfe1277e3ad4df54423bfa182e4be99d0e0162a7f2662f not found: ID does not exist" Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.886597 4823 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd9rp"] Dec 06 08:07:36 crc kubenswrapper[4823]: I1206 08:07:36.886893 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xd9rp" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="registry-server" containerID="cri-o://408be40e2d8c6de119e33229d0bdc79c8d9df0107dc6ddce419982890acc4d4e" gracePeriod=2 Dec 06 08:07:37 crc kubenswrapper[4823]: I1206 08:07:37.151512 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" path="/var/lib/kubelet/pods/68e86d3f-c3d7-451b-9c68-d318eb241e87/volumes" Dec 06 08:07:37 crc kubenswrapper[4823]: I1206 08:07:37.631919 4823 generic.go:334] "Generic (PLEG): container finished" podID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerID="408be40e2d8c6de119e33229d0bdc79c8d9df0107dc6ddce419982890acc4d4e" exitCode=0 Dec 06 08:07:37 crc kubenswrapper[4823]: I1206 08:07:37.632129 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rp" event={"ID":"5fdee1f4-8a2b-49be-bf86-47a92260a830","Type":"ContainerDied","Data":"408be40e2d8c6de119e33229d0bdc79c8d9df0107dc6ddce419982890acc4d4e"} Dec 06 08:07:37 crc kubenswrapper[4823]: I1206 08:07:37.875241 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.071254 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-catalog-content\") pod \"5fdee1f4-8a2b-49be-bf86-47a92260a830\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.071475 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-utilities\") pod \"5fdee1f4-8a2b-49be-bf86-47a92260a830\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.071534 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/5fdee1f4-8a2b-49be-bf86-47a92260a830-kube-api-access-n4gz2\") pod \"5fdee1f4-8a2b-49be-bf86-47a92260a830\" (UID: \"5fdee1f4-8a2b-49be-bf86-47a92260a830\") " Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.075193 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-utilities" (OuterVolumeSpecName: "utilities") pod "5fdee1f4-8a2b-49be-bf86-47a92260a830" (UID: "5fdee1f4-8a2b-49be-bf86-47a92260a830"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.110967 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fdee1f4-8a2b-49be-bf86-47a92260a830-kube-api-access-n4gz2" (OuterVolumeSpecName: "kube-api-access-n4gz2") pod "5fdee1f4-8a2b-49be-bf86-47a92260a830" (UID: "5fdee1f4-8a2b-49be-bf86-47a92260a830"). InnerVolumeSpecName "kube-api-access-n4gz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.157608 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fdee1f4-8a2b-49be-bf86-47a92260a830" (UID: "5fdee1f4-8a2b-49be-bf86-47a92260a830"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.173065 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.173108 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fdee1f4-8a2b-49be-bf86-47a92260a830-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.173147 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/5fdee1f4-8a2b-49be-bf86-47a92260a830-kube-api-access-n4gz2\") on node \"crc\" DevicePath \"\"" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.649800 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rp" event={"ID":"5fdee1f4-8a2b-49be-bf86-47a92260a830","Type":"ContainerDied","Data":"b570ddf59490ba50c6e4bead2ec1348980c708f2521a971a2dff98e425644371"} Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.649869 4823 scope.go:117] "RemoveContainer" containerID="408be40e2d8c6de119e33229d0bdc79c8d9df0107dc6ddce419982890acc4d4e" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.649911 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rp" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.700569 4823 scope.go:117] "RemoveContainer" containerID="d4a1eca6c32ece6b122a78ff78ea932eecf53ad613057cf92e760392011bd4ba" Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.707602 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd9rp"] Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.719197 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xd9rp"] Dec 06 08:07:38 crc kubenswrapper[4823]: I1206 08:07:38.732627 4823 scope.go:117] "RemoveContainer" containerID="f5aecb8efa18648fe7c6e811e55df8ffec213a7f0b1428682d430f6945faaaed" Dec 06 08:07:39 crc kubenswrapper[4823]: I1206 08:07:39.153700 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" path="/var/lib/kubelet/pods/5fdee1f4-8a2b-49be-bf86-47a92260a830/volumes" Dec 06 08:07:46 crc kubenswrapper[4823]: I1206 08:07:46.734067 4823 scope.go:117] "RemoveContainer" containerID="c012d5d826685f84e0ba7cd1261a0769a3274e53116927847a88f2df78507289" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.401703 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dcp2w"] Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402756 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="copy" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402777 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="copy" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402810 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="extract-content" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402818 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="extract-content" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402835 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="registry-server" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402842 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="registry-server" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402856 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="gather" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402864 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="gather" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402879 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="registry-server" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402886 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="registry-server" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402912 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="extract-content" Dec 06 08:07:58 crc 
kubenswrapper[4823]: I1206 08:07:58.402918 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="extract-content" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402928 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="extract-utilities" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402938 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="extract-utilities" Dec 06 08:07:58 crc kubenswrapper[4823]: E1206 08:07:58.402953 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="extract-utilities" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.402961 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="extract-utilities" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.403174 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ff444-56b3-410b-adf2-d69302dcdb96" containerName="registry-server" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.403206 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="gather" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.403232 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e86d3f-c3d7-451b-9c68-d318eb241e87" containerName="copy" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.403248 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdee1f4-8a2b-49be-bf86-47a92260a830" containerName="registry-server" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.408707 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.444131 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcp2w"] Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.494480 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbgn\" (UniqueName: \"kubernetes.io/projected/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-kube-api-access-njbgn\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.494681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-utilities\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.494739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-catalog-content\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.596837 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbgn\" (UniqueName: \"kubernetes.io/projected/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-kube-api-access-njbgn\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.597019 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-utilities\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.597079 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-catalog-content\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.597606 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-catalog-content\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.597676 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-utilities\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.616729 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njbgn\" (UniqueName: \"kubernetes.io/projected/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-kube-api-access-njbgn\") pod \"community-operators-dcp2w\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:58 crc kubenswrapper[4823]: I1206 08:07:58.736921 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:07:59 crc kubenswrapper[4823]: I1206 08:07:59.294420 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcp2w"] Dec 06 08:07:59 crc kubenswrapper[4823]: I1206 08:07:59.854107 4823 generic.go:334] "Generic (PLEG): container finished" podID="ed90c8c0-ea5c-42e9-aa82-f13343045bcc" containerID="c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa" exitCode=0 Dec 06 08:07:59 crc kubenswrapper[4823]: I1206 08:07:59.854291 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerDied","Data":"c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa"} Dec 06 08:07:59 crc kubenswrapper[4823]: I1206 08:07:59.854452 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerStarted","Data":"971a9604501445b4bf1094a83a2a88c54032ee183154c0eee848f0094a4185b0"} Dec 06 08:08:00 crc kubenswrapper[4823]: I1206 08:08:00.865241 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerStarted","Data":"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e"} Dec 06 08:08:01 crc kubenswrapper[4823]: I1206 08:08:01.876985 4823 generic.go:334] "Generic (PLEG): container finished" podID="ed90c8c0-ea5c-42e9-aa82-f13343045bcc" containerID="2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e" exitCode=0 Dec 06 08:08:01 crc kubenswrapper[4823]: I1206 08:08:01.877036 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerDied","Data":"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e"} Dec 06 08:08:02 crc kubenswrapper[4823]: I1206 08:08:02.888777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerStarted","Data":"40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84"} Dec 06 08:08:02 crc kubenswrapper[4823]: I1206 08:08:02.914988 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dcp2w" podStartSLOduration=2.498055291 podStartE2EDuration="4.914965995s" podCreationTimestamp="2025-12-06 08:07:58 +0000 UTC" firstStartedPulling="2025-12-06 08:07:59.856259691 +0000 UTC m=+6181.142011651" lastFinishedPulling="2025-12-06 08:08:02.273170395 +0000 UTC m=+6183.558922355" observedRunningTime="2025-12-06 08:08:02.911208377 +0000 UTC m=+6184.196960347" watchObservedRunningTime="2025-12-06 08:08:02.914965995 +0000 UTC m=+6184.200717955" Dec 06 08:08:06 crc kubenswrapper[4823]: I1206 08:08:06.052138 4823 patch_prober.go:28] interesting 
pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:08:06 crc kubenswrapper[4823]: I1206 08:08:06.052468 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:08:08 crc kubenswrapper[4823]: I1206 08:08:08.737197 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:08:08 crc kubenswrapper[4823]: I1206 08:08:08.737560 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:08:08 crc kubenswrapper[4823]: I1206 08:08:08.787248 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:08:08 crc kubenswrapper[4823]: I1206 08:08:08.986541 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:08:09 crc kubenswrapper[4823]: I1206 08:08:09.039142 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dcp2w"] Dec 06 08:08:10 crc kubenswrapper[4823]: I1206 08:08:10.964075 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dcp2w" podUID="ed90c8c0-ea5c-42e9-aa82-f13343045bcc" containerName="registry-server" containerID="cri-o://40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84" gracePeriod=2 Dec 06 08:08:11 crc kubenswrapper[4823]: I1206 08:08:11.963881 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcp2w" Dec 06 08:08:11 crc kubenswrapper[4823]: I1206 08:08:11.974384 4823 generic.go:334] "Generic (PLEG): container finished" podID="ed90c8c0-ea5c-42e9-aa82-f13343045bcc" containerID="40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84" exitCode=0 Dec 06 08:08:11 crc kubenswrapper[4823]: I1206 08:08:11.974453 4823 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 08:08:11 crc kubenswrapper[4823]: I1206 08:08:11.974464 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerDied","Data":"40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84"}
Dec 06 08:08:11 crc kubenswrapper[4823]: I1206 08:08:11.974557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcp2w" event={"ID":"ed90c8c0-ea5c-42e9-aa82-f13343045bcc","Type":"ContainerDied","Data":"971a9604501445b4bf1094a83a2a88c54032ee183154c0eee848f0094a4185b0"}
Dec 06 08:08:11 crc kubenswrapper[4823]: I1206 08:08:11.974587 4823 scope.go:117] "RemoveContainer" containerID="40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84"
Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.006262 4823 scope.go:117] "RemoveContainer" containerID="2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e"
Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.026349 4823 scope.go:117] "RemoveContainer" containerID="c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa"
Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.080945 4823 scope.go:117] "RemoveContainer" containerID="40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84"
Dec 06 08:08:12 crc kubenswrapper[4823]: E1206 08:08:12.081545 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84\": container with ID starting with 40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84 not found: ID does not exist" containerID="40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84"
Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.081605 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84"} err="failed to get container status \"40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84\": rpc error: code = NotFound desc = could not find container \"40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84\": container with ID starting with 40c1a5c42b51f1a8f1020c79c66dd51f4aaf305ee1e1276322ff98ec5a51cd84 not found: ID does not exist"
Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.081634 4823 scope.go:117] "RemoveContainer" containerID="2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e"
Dec 06 08:08:12 crc kubenswrapper[4823]: E1206 08:08:12.082028 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e\": container with ID starting with 2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e not found: ID does not exist" containerID="2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e"
Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.082085 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e"} err="failed to get container status \"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e\": rpc error: code = NotFound desc = could not find container \"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e\": container with ID starting with 2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e not found: ID does not exist"
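The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above are a benign race: the kubelet asks about containers that CRI-O has already deleted, and the runtime answers with gRPC code NotFound (visible in the err strings). A sketch of the usual client-side pattern for treating NotFound as "already gone" rather than a failure (the helper is ours, not kubelet code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent runs a runtime delete call and swallows NotFound,
    // since a container that no longer exists needs no removal.
    func removeIfPresent(remove func() error) error {
    	if err := remove(); err != nil && status.Code(err) != codes.NotFound {
    		return fmt.Errorf("remove container: %w", err)
    	}
    	return nil
    }

    func main() {
    	// Simulate the race seen in the log: the runtime reports NotFound.
    	gone := func() error { return status.Error(codes.NotFound, "could not find container") }
    	fmt.Println(removeIfPresent(gone)) // <nil>: treated as already removed
    }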
\"2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e\": container with ID starting with 2dbede52d51512fc5335969bb35cc235dca2dec0076eb79f88766dd9ef57417e not found: ID does not exist" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.082113 4823 scope.go:117] "RemoveContainer" containerID="c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa" Dec 06 08:08:12 crc kubenswrapper[4823]: E1206 08:08:12.082352 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa\": container with ID starting with c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa not found: ID does not exist" containerID="c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.082381 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa"} err="failed to get container status \"c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa\": rpc error: code = NotFound desc = could not find container \"c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa\": container with ID starting with c7f8d95f742fdd71846f691db3464b74743cd868622668b23e7561fa466be5fa not found: ID does not exist" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.088416 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-catalog-content\") pod \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.088898 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-utilities\") pod \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.088956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njbgn\" (UniqueName: \"kubernetes.io/projected/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-kube-api-access-njbgn\") pod \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\" (UID: \"ed90c8c0-ea5c-42e9-aa82-f13343045bcc\") " Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.089774 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-utilities" (OuterVolumeSpecName: "utilities") pod "ed90c8c0-ea5c-42e9-aa82-f13343045bcc" (UID: "ed90c8c0-ea5c-42e9-aa82-f13343045bcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.099838 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-kube-api-access-njbgn" (OuterVolumeSpecName: "kube-api-access-njbgn") pod "ed90c8c0-ea5c-42e9-aa82-f13343045bcc" (UID: "ed90c8c0-ea5c-42e9-aa82-f13343045bcc"). InnerVolumeSpecName "kube-api-access-njbgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.139260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed90c8c0-ea5c-42e9-aa82-f13343045bcc" (UID: "ed90c8c0-ea5c-42e9-aa82-f13343045bcc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.191289 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.191328 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njbgn\" (UniqueName: \"kubernetes.io/projected/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-kube-api-access-njbgn\") on node \"crc\" DevicePath \"\"" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.191339 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed90c8c0-ea5c-42e9-aa82-f13343045bcc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.316647 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dcp2w"] Dec 06 08:08:12 crc kubenswrapper[4823]: I1206 08:08:12.327198 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dcp2w"] Dec 06 08:08:13 crc kubenswrapper[4823]: I1206 08:08:13.152163 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed90c8c0-ea5c-42e9-aa82-f13343045bcc" path="/var/lib/kubelet/pods/ed90c8c0-ea5c-42e9-aa82-f13343045bcc/volumes" Dec 06 08:08:36 crc kubenswrapper[4823]: I1206 08:08:36.052077 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:08:36 crc kubenswrapper[4823]: I1206 08:08:36.052689 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:08:46 crc kubenswrapper[4823]: I1206 08:08:46.859473 4823 scope.go:117] "RemoveContainer" containerID="61f82de5e54a338b4a5edbe49f329b7245399a61cbaea3344f802b99b4bd00f5" Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.051604 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.052180 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.052244 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.053257 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c5c3c9d350e5f051586d17e14d2e898ac1f9c5170d0c00982511584718ac52e"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.053400 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://7c5c3c9d350e5f051586d17e14d2e898ac1f9c5170d0c00982511584718ac52e" gracePeriod=600 Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.489710 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="7c5c3c9d350e5f051586d17e14d2e898ac1f9c5170d0c00982511584718ac52e" exitCode=0 Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.489763 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"7c5c3c9d350e5f051586d17e14d2e898ac1f9c5170d0c00982511584718ac52e"} Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.490023 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerStarted","Data":"769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b"} Dec 06 08:09:06 crc kubenswrapper[4823]: I1206 08:09:06.490060 4823 scope.go:117] "RemoveContainer" containerID="8b15637ce9ffb4ad0acc526f863e62d488186d019ba87c7862522d68f44a208f" Dec 06 08:11:06 crc kubenswrapper[4823]: I1206 08:11:06.052270 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:11:06 crc kubenswrapper[4823]: I1206 08:11:06.053149 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:11:36 crc kubenswrapper[4823]: I1206 08:11:36.051484 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:11:36 crc kubenswrapper[4823]: I1206 08:11:36.052045 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:12:06 crc kubenswrapper[4823]: I1206 08:12:06.051747 4823 patch_prober.go:28] interesting pod/machine-config-daemon-7wlj2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 08:12:06 crc kubenswrapper[4823]: I1206 08:12:06.052273 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 08:12:06 crc kubenswrapper[4823]: I1206 08:12:06.052324 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" Dec 06 08:12:06 crc kubenswrapper[4823]: I1206 08:12:06.053133 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b"} pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 08:12:06 crc kubenswrapper[4823]: I1206 08:12:06.053190 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerName="machine-config-daemon" containerID="cri-o://769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" gracePeriod=600 Dec 06 08:12:06 crc kubenswrapper[4823]: E1206 08:12:06.184549 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:12:07 crc kubenswrapper[4823]: I1206 08:12:07.177019 4823 generic.go:334] "Generic (PLEG): container finished" podID="69d0518f-7105-49e1-b537-f4de7b8f9a14" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" exitCode=0 Dec 06 08:12:07 crc kubenswrapper[4823]: I1206 08:12:07.177091 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" event={"ID":"69d0518f-7105-49e1-b537-f4de7b8f9a14","Type":"ContainerDied","Data":"769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b"} Dec 06 08:12:07 crc kubenswrapper[4823]: I1206 08:12:07.177412 4823 scope.go:117] "RemoveContainer" containerID="7c5c3c9d350e5f051586d17e14d2e898ac1f9c5170d0c00982511584718ac52e" Dec 06 08:12:07 crc kubenswrapper[4823]: I1206 08:12:07.178277 4823 scope.go:117] "RemoveContainer" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" Dec 06 08:12:07 crc kubenswrapper[4823]: E1206 08:12:07.178622 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:12:22 crc kubenswrapper[4823]: I1206 08:12:22.140750 4823 scope.go:117] "RemoveContainer" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" Dec 06 08:12:22 crc kubenswrapper[4823]: E1206 08:12:22.141576 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:12:36 crc kubenswrapper[4823]: I1206 08:12:36.141438 4823 scope.go:117] "RemoveContainer" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" Dec 06 08:12:36 crc kubenswrapper[4823]: E1206 08:12:36.142364 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:12:50 crc kubenswrapper[4823]: I1206 08:12:50.141475 4823 scope.go:117] "RemoveContainer" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" Dec 06 08:12:50 crc kubenswrapper[4823]: E1206 08:12:50.142325 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:13:05 crc kubenswrapper[4823]: I1206 08:13:05.142094 4823 scope.go:117] "RemoveContainer" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" Dec 06 08:13:05 crc kubenswrapper[4823]: E1206 08:13:05.143463 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14" Dec 06 08:13:16 crc kubenswrapper[4823]: I1206 08:13:16.140832 4823 scope.go:117] "RemoveContainer" containerID="769089c185ea64985b113db33b6a6a22a8709903f9f9114772d13ba2ece9b31b" Dec 06 08:13:16 crc kubenswrapper[4823]: E1206 08:13:16.141801 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7wlj2_openshift-machine-config-operator(69d0518f-7105-49e1-b537-f4de7b8f9a14)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7wlj2" podUID="69d0518f-7105-49e1-b537-f4de7b8f9a14"